Funding: Supported by the Major Program of the National Natural Science Foundation of China (10990012), the National Natural Science Foundation of China (61201296, 61271024), the Fundamental Research Funds for the Central Universities (K5051202037), and the Guangxi Key Lab of Wireless Wideband Communication & Signal Processing (12205).
Abstract: Two order statistics detection schemes for detecting a spatially distributed target in white Gaussian noise are studied. When the number of strong scattering cells is known, we first derive an optimal detector, which requires many processing channels; the structure of such an optimal detector is complex. Therefore, a simpler quasi-optimal detector is then introduced. This quasi-optimal detector, called the strong scattering cells' number dependent order statistics (SND-OS) detector, takes the form of an average of the strongest scattering cells, whose number is assumed known. If the number of strong scattering cells is unknown, as in real situations, the multi-channel order statistics (MC-OS) detector is used: in each channel, a different number of the largest target returns is averaged. The false alarm probability analysis and threshold settings for each channel are then given, followed by detection results obtained through Monte Carlo simulation based on a simulated target model and three measured targets. In particular, the theoretical analysis and simulation results highlight that the MC-OS detector can efficiently detect range-spread targets in white Gaussian noise.
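A minimal sketch of the MC-OS channel statistic described above: the average of the k largest range cells, compared against a per-channel threshold. The function names, window size, and threshold values are illustrative assumptions, not taken from the paper; in practice each threshold would be set from the false alarm analysis.

```python
import numpy as np

def mc_os_statistic(cells, k):
    """Average of the k largest cells in a range window (one MC-OS channel)."""
    top_k = np.sort(cells)[-k:]          # k largest order statistics
    return top_k.mean()

def mc_os_detect(cells, ks, thresholds):
    """Declare a detection if any channel's statistic exceeds its threshold."""
    return any(mc_os_statistic(cells, k) > t for k, t in zip(ks, thresholds))

# Example: 64 range cells of squared-magnitude white Gaussian noise samples
rng = np.random.default_rng(0)
cells = rng.normal(size=64) ** 2
print(mc_os_detect(cells, ks=[1, 2, 4, 8], thresholds=[12.0, 8.0, 6.0, 4.0]))
```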
Funding: Supported by the National High-Tech Research and Development Program (2002AA123032) and the Innovative Research Team Program of UESTC, China.
Abstract: Novel closed-form expressions are presented for the average channel capacity of dual selection diversity, as well as for the bit-error rate (BER) of several coherent and noncoherent digital modulation schemes, in correlated Weibull fading channels with nonidentical statistics. The results are expressed in terms of Meijer's G-function, which can be easily evaluated numerically. Simulation results are presented to validate the proposed theoretical analysis and to examine the effects of the fading severity on the quantities of interest.
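Closed-form results like these are typically cross-checked by simulation. The sketch below is one hedged way to estimate the average capacity of dual selection diversity over correlated, nonidentically distributed Weibull branches by Monte Carlo; the correlation model (correlated Gaussians mapped through a power transform), the shape parameters, and the SNR are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 200_000, 0.5                 # trials; assumed Gaussian correlation
beta1, beta2 = 2.0, 3.5               # assumed nonidentical Weibull shapes
avg_snr = 10.0                        # assumed average branch SNR (linear)

# Two correlated Rayleigh envelopes from correlated complex Gaussians,
# then a power transform R**(2/beta) to obtain Weibull fading envelopes.
cov = [[1.0, rho], [rho, 1.0]]
gi = rng.multivariate_normal([0, 0], cov, size=n)   # in-phase parts
gq = rng.multivariate_normal([0, 0], cov, size=n)   # quadrature parts
r = np.hypot(gi, gq)                                # correlated Rayleigh pair
w = np.column_stack([r[:, 0] ** (2 / beta1), r[:, 1] ** (2 / beta2)])

# Selection combining: the receiver picks the instantaneously stronger branch.
gamma = avg_snr * w**2 / (w**2).mean(axis=0)        # normalize branch SNRs
capacity = np.log2(1.0 + gamma.max(axis=1)).mean()  # ergodic capacity
print(f"simulated average capacity: {capacity:.3f} bit/s/Hz")
```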
Abstract: The CFAR technique is widely used in radar target detection. The traditional algorithm is cell averaging (CA), which gives good detection performance in a relatively ideal environment. Recently, censoring techniques have been adopted to make detectors perform robustly, and ordered statistic (OS) and trimmed mean (TM) methods have been proposed. TM methods treat all reference samples participating in the clutter power estimate equally, but this processing does not yield an effective estimate of the clutter power. Therefore, a quasi best weighted (QBW) order statistics algorithm is presented in this paper. In special cases, QBW reduces to CA and to the censored mean level detector (CMLD).
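For concreteness, here is a minimal sketch of the two baseline estimators the abstract contrasts: a CA-CFAR threshold, using the classical scale factor for square-law-detected exponential noise, and an OS-CFAR clutter estimate from the k-th order statistic. The window length, false-alarm probability, and k are illustrative.

```python
import numpy as np

def ca_cfar_threshold(ref, pfa):
    """CA-CFAR: scale the reference-window mean so that, for exponential
    (square-law) noise, the false-alarm probability equals pfa."""
    n = len(ref)
    alpha = n * (pfa ** (-1.0 / n) - 1.0)    # classical CA-CFAR scale factor
    return alpha * np.mean(ref)

def os_cfar_estimate(ref, k):
    """OS-CFAR: use the k-th smallest reference cell as the clutter-power
    estimate (the multiplicative threshold factor is omitted here)."""
    return np.sort(ref)[k - 1]

rng = np.random.default_rng(2)
ref = rng.exponential(size=24)               # square-law noise reference cells
print(ca_cfar_threshold(ref, pfa=1e-4), os_cfar_estimate(ref, k=18))
```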
Funding: This project was supported by the National 863 Project (2001AA422420-02).
Abstract: An on-line blind source separation (BSS) algorithm is presented in this paper under the assumption that the sources are temporally correlated signals. By using only some of the observed samples in a recursive calculation, the whitening matrix and the rotation matrix can be approximately obtained by evaluating only one cost function. Simulations show good performance of the algorithm.
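The whitening step is the part that is easy to make concrete. Below is a hedged batch sketch of computing a whitening matrix from the observed mixtures; the paper computes it recursively and on-line, whereas this eigendecomposition version only illustrates what the matrix achieves.

```python
import numpy as np

def whitening_matrix(x):
    """Whitening matrix W for observations x (channels x samples): after
    y = W @ x, the sample covariance of y is (approximately) the identity."""
    xc = x - x.mean(axis=1, keepdims=True)
    cov = xc @ xc.T / xc.shape[1]
    vals, vecs = np.linalg.eigh(cov)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

# Two temporally correlated sources mixed by an unknown random matrix
rng = np.random.default_rng(3)
t = np.arange(5000)
s = np.vstack([np.sin(0.05 * t), np.sign(np.sin(0.013 * t))])
x = rng.normal(size=(2, 2)) @ s
y = whitening_matrix(x) @ (x - x.mean(axis=1, keepdims=True))
print(np.round(y @ y.T / y.shape[1], 3))   # ~ identity matrix
```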
Abstract: We propose an information-theoretic objective function for measuring the statistical independence of source signals. We then develop a learning algorithm for blind separation of nonstationary signals by minimizing this objective function, in which the nonstationarity property and a direct-architecture neural network are exploited. The analysis demonstrates the equivalence of the two neural architectures in some special cases. Computer simulations show the validity of the proposed algorithm, and the performance surface of the objective function is given at the end of the paper.
Abstract: A single-channel speech enhancement method for noisy speech signals at very low signal-to-noise ratios is presented, based on masking properties of the human auditory system and power spectral density estimation of nonstationary noise. It allows for an automatic adaptation in time and frequency of the parametric enhancement system, and finds the best tradeoff among the amount of noise reduction, the speech distortion, and the level of musical residual noise, based on a criterion correlated with perception and SNR. This leads to a significant reduction of the unnatural structure of the residual noise. Results with several noise types show that the enhanced speech is more pleasant to a human listener.
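The time-frequency gain adaptation at the core of such systems can be sketched as follows: a short-time spectral gain built from an estimated noise PSD, with an over-subtraction factor and a spectral floor. In the paper's method these parameters would be adapted per time-frequency bin from auditory masking thresholds rather than held fixed as here; all names and values below are illustrative.

```python
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy, noise_psd, fs=16000, nperseg=512, oversub=2.0, floor=0.1):
    """Minimal spectral-subtraction sketch: attenuate each time-frequency bin
    according to an estimated noise PSD (length nperseg // 2 + 1); the
    spectral floor limits musical residual noise."""
    _, _, spec = stft(noisy, fs=fs, nperseg=nperseg)
    power = np.abs(spec) ** 2
    gain = np.maximum(1.0 - oversub * noise_psd[:, None] / (power + 1e-12),
                      floor)
    _, enhanced = istft(gain * spec, fs=fs, nperseg=nperseg)
    return enhanced
```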
Funding: Supported by the National Natural Science Foundation of China (61372136, 61372134, 61172137), the Fundamental Research Funds for the Central Universities (K5051202005), and the China Scholarship Council (CSC).
Abstract: For radar targets flying at low altitude, multiple propagation paths produce fading or enhancement relative to the level that would be expected in a free-space environment. In this paper, a new detection method based on a wide-ranging multi-frequency radar for low-angle targets is proposed. Sequentially transmitting multiple pulses with different frequencies is first applied to decorrelate the coherence of the direct and reflected echoes. After all echoes are received, the multi-frequency samples are sorted in descending order of amplitude. Some high-amplitude echoes in the same range cell are accumulated to improve the signal-to-noise ratio, and the optimal number of high-amplitude echoes is analyzed and determined by experiments. Finally, simulation results are presented to verify the effectiveness of the method.
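The sort-and-accumulate step is simple enough to sketch: noncoherently sum the k strongest of the multi-frequency returns in a range cell. The number of frequencies and the choice k = 6 are illustrative; the paper selects the optimal k experimentally.

```python
import numpy as np

def accumulate_top_k(echo_amplitudes, k):
    """Sort one range cell's multi-frequency echo amplitudes in descending
    order and noncoherently accumulate the k strongest of them."""
    return np.sort(np.abs(echo_amplitudes))[::-1][:k].sum()

rng = np.random.default_rng(4)
echoes = rng.rayleigh(size=16)        # amplitudes at 16 carrier frequencies
print(accumulate_top_k(echoes, k=6))
```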
Funding: Supported by the National Natural Science Foundation of China (12072090).
Abstract: This paper addresses the hypersonic glide vehicle (HGV) tracking problem under high maneuverability and non-stationary heavy-tailed measurement noise without prior statistics in complicated flight environments. Since interacting multiple model (IMM) filtering is well known for its ability to cover the motion properties of multiple models, the problem is formulated as modeling the non-stationary heavy-tailed measurement noise, without any prior statistics, in the IMM framework. Firstly, the Gaussian-inverse Wishart distribution is embedded in the improved Pearson type-VII (PTV) distribution, which can adaptively adjust its parameters to model the non-stationary heavy-tailed measurement noise. Besides, the degree-of-freedom (DOF) parameters are obtained by maximizing the evidence lower bound (ELBO) in the variational Bayesian optimization framework, instead of being fixed, to handle uncertain non-Gaussian degrees. Then, fusion forms are analytically derived based on the maximum Versoria fusion criterion instead of the moment matching approach, which, combined with the weighted Kullback-Leibler average theory, provides a precise approximation of the PTV mixture distribution in the mixing and output steps. Simulation results demonstrate the superiority and robustness of the proposed algorithm in typical HGV tracking when the measurement noise is non-stationary and lacks prior statistics.
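As background for the heavy-tailed noise model: the Pearson type-VII family includes the Student-t distribution, and a Student-t sample can be drawn as a Gaussian scale mixture, the construction that variational treatments of heavy-tailed noise typically exploit. The sketch below shows only this standard construction, not the paper's PTV model; all parameter values are illustrative.

```python
import numpy as np

def student_t_noise(rng, nu, scale, size):
    """Heavy-tailed noise as a Gaussian scale mixture: draw a Gamma-distributed
    precision multiplier w (shape nu/2, mean 1), then sample
    Normal(0, scale**2 / w); the result is Student-t with nu degrees of
    freedom, a special case of the Pearson type-VII family."""
    w = rng.gamma(nu / 2.0, 2.0 / nu, size=size)
    return rng.normal(0.0, scale / np.sqrt(w))

rng = np.random.default_rng(5)
print(student_t_noise(rng, nu=3.0, scale=1.0, size=5))
```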
Funding: Supported by the National Natural Science Foundation of China (12061017, 12361055) and the Research Fund of the Guangxi Key Lab of Multi-source Information Mining & Security (22-A-01-01).
Abstract: The existing blockwise empirical likelihood (BEL) method blocks the observations or their analogues, which has proven useful under some dependent-data settings. In this paper, we introduce a new BEL (NBEL) method that blocks the scoring functions in high-dimensional cases. We study the construction of confidence regions for the parameters of spatial autoregressive models with spatial autoregressive disturbances (SARAR models) with a high-dimensional parameter, using the NBEL method. It is shown that the NBEL ratio statistics are asymptotically χ²-type distributed, which is used to obtain NBEL-based confidence regions for the parameters of SARAR models. A simulation study is conducted to compare the performance of the NBEL and the usual EL methods.
Abstract: Risk management often plays an important role in decision making under uncertainty. In quantitative risk management, assessing and optimizing risk metrics requires efficient computing techniques and reliable theoretical guarantees. In this paper, we introduce several topics in quantitative risk management and review some of the recent studies and advances on these topics. We consider several risk metrics and study decision models that involve them, with a main focus on the related computing techniques and theoretical properties. We show that stochastic optimization, as a powerful tool, can be leveraged to effectively address these problems.
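One standard example of the interplay between a risk metric and stochastic optimization (only an example; the paper's specific models are not reproduced here) is the Rockafellar-Uryasev representation of conditional value-at-risk, CVaR_a(L) = min_t { t + E[(L - t)+] / (1 - a) }, whose minimizer t is the a-quantile (VaR). It turns CVaR evaluation and optimization into a convex stochastic program:

```python
import numpy as np

def cvar(losses, alpha):
    """Empirical CVaR via the Rockafellar-Uryasev formula: the inner
    minimizer t equals the empirical alpha-quantile (VaR) of the losses."""
    t = np.quantile(losses, alpha)
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

rng = np.random.default_rng(6)
losses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # sampled loss L
print(f"VaR_0.95 = {np.quantile(losses, 0.95):.3f}, "
      f"CVaR_0.95 = {cvar(losses, 0.95):.3f}")
```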
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01B187).
Abstract: Federated learning (FL) is a distributed machine learning paradigm for edge cloud computing. FL can facilitate data-driven decision-making in tactical scenarios, effectively addressing both the data volume and the infrastructure challenges of edge environments. However, the diversity of clients in edge cloud computing presents significant challenges for FL. Personalized federated learning (pFL) has received considerable attention in recent years; one line of pFL exploits both global and local information in the local model. Current pFL algorithms suffer from limitations such as slow convergence, catastrophic forgetting, and poor performance on complex tasks, and still fall significantly short of centralized learning. To achieve high pFL performance, we propose FedCLCC: Federated Contrastive Learning and Conditional Computing. The core of FedCLCC is the use of contrastive learning and conditional computing: contrastive learning measures feature representation similarity to adjust the local model, while conditional computing separates the global and local information and feeds each to its corresponding head for global and local handling. Our comprehensive experiments demonstrate that FedCLCC outperforms other state-of-the-art FL algorithms.
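For readers new to FL, the baseline aggregation step that pFL methods such as FedCLCC build on can be sketched as follows. This is plain FedAvg, not FedCLCC; FedCLCC additionally splits each client model into a shared global part and a personalized local head.

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Baseline FL aggregation (FedAvg): size-weighted average of the
    clients' model parameters, returned as the new global model."""
    total = sum(client_sizes)
    return sum(p * (n / total) for p, n in zip(client_params, client_sizes))

# Three clients with toy 2-parameter models and unequal dataset sizes
clients = [np.array([1.0, 2.0]), np.array([3.0, 0.0]), np.array([0.0, 1.0])]
sizes = [100, 50, 50]
print(fedavg(clients, sizes))   # -> [1.25 1.25]
```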
Funding: Funded through the India Meteorological Department, New Delhi, India, under the Forecasting Agricultural output using Space, Agrometeorology and Land based observations (FASAL) project, fund number ASC/FASAL/KT-11/01/HQ-2010.
Abstract: Background: Cotton is one of the most important commercial crops after food crops, especially in countries like India, where it is grown extensively under rainfed conditions. Because of its use in multiple industries, such as the textile, medicine, and automobile industries, it has great commercial importance. The crop's performance is strongly influenced by prevailing weather dynamics, so as the climate changes, assessing how weather changes affect crop performance is essential. Among the various available techniques, crop models are the most effective and widely used tools for predicting yields. Results: This study compares statistical and machine learning models on their ability to predict cotton yield across the major producing districts of Karnataka, India, using a long-term dataset spanning 1990 to 2023 that includes yield and weather factors. Artificial neural networks (ANNs) performed best, with acceptable yield deviations within ±10% during both the vegetative stage (F1) and the mid stage (F2) for cotton. The model evaluation metrics, root mean square error (RMSE), normalized root mean square error (nRMSE), and modelling efficiency (EF), were also within the acceptance limits in most districts. Furthermore, the tested ANN model was used to assess the importance of the dominant weather factors influencing crop yield in each district. Specifically, morning relative humidity as an individual parameter, and its interaction with maximum and minimum temperature, had a major influence on cotton yield in most of the districts with predicted yields. These differences highlight the district-specific interactions of weather factors in cotton yield formation, reflecting the individual response of each weather factor under different soils and management conditions across the major cotton-growing districts of Karnataka. Conclusions: Compared with statistical models, machine learning models such as ANNs proved more efficient in forecasting cotton yield because of their ability to consider the interactive effects of weather factors on yield formation at different growth stages. This underlines the suitability of ANNs for yield forecasting under rainfed conditions and for studying the relative impacts of weather factors on yield. The study thus aims to provide valuable insights to support stakeholders in planning effective crop management strategies and formulating relevant policies.
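The three evaluation metrics named above have standard definitions, sketched below under the usual conventions (nRMSE as a percentage of the observed mean; EF as Nash-Sutcliffe modelling efficiency). The toy yield numbers are hypothetical.

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """RMSE, normalized RMSE (% of the observed mean), and Nash-Sutcliffe
    modelling efficiency (EF = 1 is a perfect model; EF = 0 is no better
    than predicting the observed mean)."""
    obs = np.asarray(observed, dtype=float)
    pred = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    nrmse = 100.0 * rmse / obs.mean()
    ef = 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, nrmse, ef

obs = [1200.0, 950.0, 1100.0, 1300.0]    # hypothetical district yields, kg/ha
pred = [1150.0, 1000.0, 1080.0, 1260.0]
print(evaluation_metrics(obs, pred))
```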
Funding: Project (52274096) supported by the National Natural Science Foundation of China; Project (WS2023A03) supported by the State Key Laboratory Cultivation Base for Gas Geology and Gas Control, China.
Abstract: Accurate assessment of coal brittleness is crucial in the design of coal seam drilling and underground coal mining operations. This study proposes a method for evaluating the brittleness of gas-bearing coal based on a statistical damage constitutive model and energy evolution mechanisms. Initially, integrating the principle of effective stress and the Hoek-Brown criterion, a statistical damage constitutive model for gas-bearing coal is established and validated through triaxial compression tests under different gas pressures to verify its accuracy and applicability. Subsequently, employing the energy evolution mechanism, two energy characteristic parameters (the elastic energy proportion and the dissipated energy proportion) are analyzed, and, based on the damage stress thresholds, the damage evolution characteristics of gas-bearing coal are explored. Finally, by integrating the energy characteristic parameters with the damage parameters, a novel brittleness index is proposed. The results demonstrate that the theoretical curves derived from the statistical damage constitutive model closely align with the test curves, accurately reflecting the stress-strain characteristics of gas-bearing coal and revealing the stress drop and softening characteristics of coal in the post-peak stage. The shape parameter and scale parameter represent the brittleness and the macroscopic strength of the coal, respectively. As gas pressure increases from 1 to 5 MPa, the shape parameter and the scale parameter decrease by 22.18% and 60.45%, respectively, indicating a reduction in both the brittleness and the strength of the coal. Parameters such as the maximum damage rate and the peak elastic energy storage limit positively correlate with coal brittleness. The brittleness index effectively captures the brittleness characteristics and reveals a decrease in brittleness and an increase in sensitivity to plastic deformation under higher gas pressure conditions.
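Statistical damage constitutive models of this kind commonly assume Weibull-distributed microelement strength, in which case the damage variable evolves as D = 1 - exp(-(ε/F0)^m); the shape parameter m and scale parameter F0 then play exactly the brittleness and macroscopic-strength roles the abstract describes. The sketch below shows only this generic form, with illustrative parameter values, not the paper's gas-pressure-dependent model.

```python
import numpy as np

def weibull_damage(strain, m, f0):
    """Damage variable D of a statistical damage model with
    Weibull-distributed microelement strength:
    D = 1 - exp(-(strain / f0)**m). A larger shape parameter m gives a
    more abrupt (more brittle) damage evolution; f0 sets the strength scale."""
    return 1.0 - np.exp(-(np.asarray(strain) / f0) ** m)

strain = np.linspace(0.0, 0.02, 5)        # illustrative axial strains
print(np.round(weibull_damage(strain, m=3.0, f0=0.01), 4))
```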
Funding: Project (11272119) supported by the National Natural Science Foundation of China.
Abstract: This paper develops a statistical damage constitutive model for deep rock that considers the effects of external load and thermal treatment temperature, based on the distortion energy. The model parameters were determined from the extremum features of the stress-strain curve, and the model predictions were compared with experimental results for marble samples. It is found that, as the treatment temperature rises, the coupling damage evolution curve shows an S-shape and the slope of its ascending branch gradually decreases during the coupling damage evolution process. At a constant temperature, confining pressure can suppress the expansion of micro-fractures; as the confining pressure increases, the rock exhibits ductile characteristics, and the shape of the coupling damage curve changes from an S-shape into a quasi-parabolic shape. The model can well characterize the influence of high temperature on the mechanical properties of deep rock and its brittleness-ductility transition under confining pressure. It is also suitable for sandstone and granite, especially in predicting the pre-peak stage and the peak stress of the stress-strain curve under the coupled action of confining pressure and high temperature. These results can provide a reference for further research on the constitutive relationships of rock-like materials and their engineering applications.