Funding: supported by the Research Start Funds for Introducing High-level Talents of North China University of Water Resources and Electric Power.
Abstract: This paper expresses the efficient outputs of a decision-making unit (DMU) as the sum of "average outputs", forecasted by a GM(1,N) model, and "increased outputs", which reflect the difficulty of realizing efficient outputs. The increased outputs are obtained by linear programming based on data envelopment analysis (DEA) efficiency theory, wherein a new sample is introduced whose inputs equal the budget for period n+1 and whose outputs are forecasted by the GM(1,N) model. This overcomes a shortcoming of existing methods, in which the forecasted efficient outputs may fall below the actual outputs attainable given the input-output trends of the preceding n periods. The new method provides decision-makers with more decision-making information, and its initial conditions are easy to specify.
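As a minimal illustration of the DEA efficiency idea used to solve for the increased outputs: in the single-input, single-output case, constant-returns-to-scale DEA reduces to comparing each DMU's productivity ratio against the best observed ratio. The general multi-input, multi-output case instead requires solving one linear program per DMU; the function name and data below are hypothetical.

```python
def dea_efficiency(inputs, outputs):
    """CRS efficiency scores for the single-input, single-output case:
    each DMU's output/input ratio relative to the best DMU.
    (A sketch only -- general DEA solves a linear program per DMU.)"""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical budgets (inputs) and observed outputs for three DMUs:
print(dea_efficiency([2.0, 4.0, 5.0], [4.0, 6.0, 5.0]))  # → [1.0, 0.75, 0.5]
```

A DMU scoring 1.0 lies on the efficient frontier; scores below 1.0 measure how far its productivity falls short of the best-practice DMU.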
Abstract: Volumetric efficiency and air charge estimation is one of the most demanding tasks in the control of today's internal combustion engines. In particular, using a three-way catalytic converter requires strict control of the air/fuel ratio around the stoichiometric point, and hence an accurate model for air charge estimation. However, the high complexity and nonlinearity of the gas flow in an internal combustion engine make air charge estimation challenging. This is especially true in engines with variable valve timing, in which the gas flow is more complex and depends on more variables. The result is models that are either largely empirical (such as look-up tables), lacking interpretability and extrapolation capability, or physically based models that are unsuitable for onboard applications. To address these problems, a novel semi-empirical model is proposed that needs only engine speed, load, and valve timing to predict volumetric efficiency. The accuracy and generalizability of the model are demonstrated on numerical and experimental data from three distinct engines, with normalized test errors of 0.0316, 0.0152 and 0.24, respectively. The model's performance and complexity were also compared with neural networks as typical black-box models: its complexity is less than half that of the neural networks, its computational cost is approximately 0.12 of theirs, and its prediction capability in the considered case studies is usually better. These results show the superiority of the proposed model over conventional black-box models such as neural networks in terms of accuracy, generalizability and computational cost.
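The abstract does not specify how the reported test errors were normalized; one common convention, assumed here purely for illustration, is the RMSE divided by the range of the measured values.

```python
import math

def normalized_rmse(measured, predicted):
    # RMSE divided by the range of the measured signal -- one common
    # normalization; the paper may use a different convention.
    n = len(measured)
    rmse = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)
    return rmse / (max(measured) - min(measured))

# Hypothetical volumetric-efficiency measurements vs. model predictions:
print(normalized_rmse([0.80, 0.85, 0.90], [0.81, 0.84, 0.90]))
```

Under this convention, an error of 0.0152 means the typical prediction deviation is about 1.5% of the spread of the measured data.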
Abstract: The present study analyzed the technical efficiency of rice farms in southwestern Niger. Data from a January-March 2015 survey of 148 farms in three districts of southwestern Niger were analyzed using the DEA-Tobit two-step method. In the first step, data envelopment analysis (DEA) was applied to estimate technical, pure technical and scale efficiency. In the second step, Tobit regression was used to identify factors affecting technical efficiency. The results showed that rice producers in southwestern Niger could reduce their inputs by 52% and still produce the same level of rice output. The Tobit regression showed that factors such as farm size, experience in rice farming, cooperative membership, main occupation and land ownership had a direct impact on technical efficiency.
Funding: supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data processing applications. However, high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and its high power consumption is not environmentally friendly. This paper focuses on low-rank optimization for efficient deep learning. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is categorized into pre-train, pre-set, and compression-aware methods. Together with a series of integrable techniques, such as sparse pruning, quantization, and entropy coding, these can be combined into an integrated framework with lower computational complexity and storage. In addition to summarizing recent technical advances, two findings motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms other conventional sparsity measures such as the ℓ1 norm for network compression. The other is a spatial-temporal balance for tensorized neural networks: to accelerate their training, it is crucial to leverage redundancy for both model compression and subspace training.
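The effective rank mentioned above has a standard closed form: the exponential of the Shannon entropy of the singular values normalized to sum to one. A minimal sketch, with the singular values supplied directly (in practice they would come from an SVD of a weight matrix):

```python
import math

def effective_rank(singular_values):
    """Effective rank: exp of the Shannon entropy of the normalized
    singular-value distribution. Equals the number of singular values
    when they are all equal, and approaches 1 as one value dominates."""
    total = sum(singular_values)
    p = [s / total for s in singular_values if s > 0]
    entropy = -sum(pi * math.log(pi) for pi in p)
    return math.exp(entropy)

# An evenly spread spectrum has full effective rank:
print(effective_rank([1.0, 1.0, 1.0, 1.0]))   # → 4.0
# A dominated spectrum has effective rank close to 1:
print(effective_rank([10.0, 0.1, 0.1, 0.1]))
```

Unlike a hard rank count or the ℓ1 norm, this measure is a smooth function of the spectrum, which is what makes it attractive as a compression objective.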