The conventional data envelopment analysis (DEA) measures the relative efficiencies of a set of decision making units with exact values of inputs and outputs. In real-world problems, however, inputs and outputs typically have some level of fuzziness. To analyze a decision making unit (DMU) with fuzzy input/output data, previous studies provided the fuzzy DEA model and proposed an associated evaluating approach. Nonetheless, several deficiencies remain to be addressed, including the α-cut approaches, the types of fuzzy numbers, and the ranking techniques. Moreover, a fuzzy sample DMU still cannot be evaluated with the fuzzy DEA model. Therefore, this paper proposes a fuzzy DEA model based on sample decision making units (FSDEA). Five evaluation approaches, together with the related algorithm and ranking methods, are provided to test the fuzzy sample DMU of the FSDEA model. A numerical experiment is used to demonstrate the approach and to compare the results with those obtained using alternative approaches.
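As a quick illustration of the α-cut idea mentioned above (not the FSDEA model itself), the sketch below computes α-cuts of a triangular fuzzy number; evaluating a DEA model at the interval endpoints for several α levels is one common way of handling fuzzy inputs and outputs. The fuzzy input value is hypothetical.

```python
# Alpha-cut of a triangular fuzzy number (l, m, u):
# [l + alpha*(m - l), u - alpha*(u - m)]. Illustrative only.
def alpha_cut(tfn, alpha):
    l, m, u = tfn
    return (l + alpha * (m - l), u - alpha * (u - m))

fuzzy_input = (2.0, 3.0, 4.5)                     # hypothetical triangular fuzzy input
cuts = {a: alpha_cut(fuzzy_input, a) for a in (0.0, 0.5, 1.0)}
# {0.0: (2.0, 4.5), 0.5: (2.5, 3.75), 1.0: (3.0, 3.0)}
```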
The application of data envelopment analysis (DEA) as a multiple criteria decision making (MCDM) technique has been gaining more and more attention in recent research. In practice, uncertainties in the input and output data of a decision making unit (DMU) may make the nominal solution infeasible and render the efficiency scores meaningless from a practical point of view. This paper analyzes the impact of data uncertainty on DEA evaluation results and proposes several robust DEA models, based on the adaptation of recently developed robust optimization approaches, that are immune to input and output data uncertainties. The robust DEA models are built on the input-oriented and output-oriented CCR models, respectively, for the cases where uncertainties appear in the output data and in the input data. Furthermore, the robust DEA models can deal with random symmetric uncertainty and unknown-but-bounded uncertainty, in both of which the distributions of the random data entries are permitted to be unknown. The robust DEA models are implemented in a numerical example, and the efficiency scores and rankings of these models are compared. The results indicate that the robust DEA approach can be a more reliable method for efficiency evaluation and ranking in MCDM problems.
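For readers unfamiliar with the underlying model, the following sketch solves the standard input-oriented CCR envelopment program with scipy. It uses exact (nominal) toy data and does not include the robust counterparts developed in the paper.

```python
# Input-oriented CCR envelopment model solved with scipy.optimize.linprog.
# Decision variables are ordered [theta, lambda_1, ..., lambda_n].
# X is (m inputs x n DMUs), Y is (s outputs x n DMUs); data are assumed exact.
import numpy as np
from scipy.optimize import linprog

def ccr_input_oriented(X, Y, o):
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    # input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # output constraints: sum_j lambda_j * y_rj >= y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1),
                  method="highs")
    return res.x[0]                               # efficiency score of DMU o

X = np.array([[2., 3., 5., 4.], [7., 5., 6., 8.]])   # toy data: 2 inputs, 4 DMUs
Y = np.array([[4., 6., 5., 7.]])                      # 1 output
scores = [ccr_input_oriented(X, Y, o) for o in range(X.shape[1])]
```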
Mendelian randomization (MR) is widely used in causal mediation analysis to control for unmeasured confounding effects, which is valid under some strong assumptions. It is thus of great interest to assess the impact of violations of these MR assumptions through sensitivity analysis. Sensitivity analyses have been conducted for simple MR-based causal average effect analyses, but they are not available for MR-based mediation analysis studies, and we aim to fill this gap in this paper. We propose to use two sensitivity parameters to quantify the effect of deviations from the IV assumptions. With these two sensitivity parameters, we derive consistent indirect causal effect estimators and establish their asymptotic properties. Our theoretical results can be used in MR-based mediation analysis to study the impact of violations of MR assumptions. The finite sample performance of the proposed method is illustrated through simulation studies, sensitivity analysis, and application to a real genome-wide association study.
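As background only, the sketch below computes the standard ratio (Wald) estimator used in simple MR analyses on simulated data with an unmeasured confounder; the paper's sensitivity analysis for the mediation setting is not reproduced here, and all data and coefficients are illustrative.

```python
# Simple MR ratio (Wald) estimator on simulated data with an unmeasured confounder.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
g = rng.binomial(2, 0.3, n)                        # genetic instrument (0/1/2)
u = rng.standard_normal(n)                         # unmeasured confounder
x = 0.5 * g + u + rng.standard_normal(n)           # exposure
y = 0.2 * x + u + rng.standard_normal(n)           # outcome; true causal effect 0.2

beta_gx = np.cov(g, x)[0, 1] / np.var(g, ddof=1)   # instrument-exposure association
beta_gy = np.cov(g, y)[0, 1] / np.var(g, ddof=1)   # instrument-outcome association
beta_iv = beta_gy / beta_gx                        # ratio (Wald) estimate of the causal effect
```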
The classic data envelopment analysis (DEA) model is used to evaluate the efficiency of decision-making units (DMUs) under the assumption that all DMUs are evaluated with the same criteria setting. Recently, new research has begun to focus on the efficiency analysis of non-homogeneous DMUs arising in real practice, such as the evaluation of departments in a university, where departments argue for the adoption of different criteria based on their disciplinary characteristics. A DEA procedure is proposed in this paper to address the efficiency analysis of two non-homogeneous DMU groups. Firstly, an analytical framework is established to reconcile the diversified input and output (IO) criteria of the two non-homogeneous groups. Then, a criteria fusion operation is designed to obtain different DEA analysis strategies. Meanwhile, the Friedman test is introduced to analyze the consistency of the efficiency results produced by the different strategies. Next, ordered weighted averaging (OWA) operators are applied to integrate the different information and reach final conclusions. Finally, a numerical example is used to illustrate the proposed method. The result indicates that the proposed method relaxes the restriction of the classical DEA model and provides more analytical flexibility for the different decision analysis scenarios arising from practical applications.
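A minimal sketch of two ingredients named above, assuming toy efficiency scores: the Friedman test (via scipy) for checking the consistency of the scores produced by different strategies, and an OWA operator for fusing each DMU's scores. The weights are hypothetical, not those derived in the paper.

```python
# Friedman consistency check and OWA fusion of efficiency scores (toy values).
import numpy as np
from scipy.stats import friedmanchisquare

scores = np.array([[0.82, 0.79, 0.85],     # rows: DMUs, columns: analysis strategies
                   [0.91, 0.88, 0.90],
                   [0.66, 0.71, 0.69],
                   [0.74, 0.70, 0.77]])

stat, p = friedmanchisquare(*scores.T)     # each column is one related sample
print(f"Friedman statistic={stat:.3f}, p={p:.3f}")

def owa(values, weights):
    """OWA: weights are applied to the values sorted in descending order."""
    return np.sort(values)[::-1] @ np.asarray(weights)

w = [0.5, 0.3, 0.2]                        # hypothetical OWA weights
fused = [owa(row, w) for row in scores]    # one fused score per DMU
```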
Traditional data envelopment analysis (DEA) theory assumes that decision variables are regarded as either inputs or outputs, and that no variable can play the roles of both an input and an output at the same time. In fact, some variables work as inputs and outputs simultaneously and are called dual-role variables. Traditional DEA models cannot be used to appraise the performance of decision making units containing dual-role variables. This paper analyzes the structure and properties of production systems comprising dual-role variables and proposes a DEA model integrating dual-role variables. Finally, the proposed model is illustrated by evaluating the efficiency of university departments.
It is difficult to detect anomalies whose matching relationships among certain data attributes differ greatly from those of the other samples in a dataset. To address this problem, an approach based on wavelet analysis for detecting and amending anomalous samples was proposed. Taking full advantage of the multi-resolution and local-analysis properties of wavelet analysis, the approach can detect and amend anomalous samples effectively. To enable rapid numerical computation of the wavelet transform for a discrete sequence, a modified algorithm based on the Newton-Cotes formula was also proposed. The experimental results show that the approach is feasible, effective, and practical.
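The following numpy-only sketch illustrates the general idea of wavelet-based detection and amendment: Haar detail coefficients respond strongly to locally inconsistent samples, which are then flagged with a robust threshold and replaced by a neighbourhood estimate. It is not the authors' Newton-Cotes-based algorithm, and the signal and threshold are illustrative.

```python
# Flag and amend anomalous samples using Haar wavelet detail coefficients.
import numpy as np

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.05 * rng.standard_normal(256)
x[100] += 1.5                                   # inject an anomalous sample

# Undecimated single-level Haar detail: d[n] = (x[n] - x[n-1]) / sqrt(2)
detail = np.zeros_like(x)
detail[1:] = (x[1:] - x[:-1]) / np.sqrt(2)

# Robust threshold from the median absolute deviation of the detail signal
sigma = np.median(np.abs(detail)) / 0.6745
anomalies = np.flatnonzero(np.abs(detail) > 5 * sigma)

# Amend flagged samples with the average of their neighbours
for i in anomalies:
    if 0 < i < len(x) - 1:
        x[i] = 0.5 * (x[i - 1] + x[i + 1])
```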
In this paper a new method of passive underwater TMA (target motion analysis) using data fusion is presented. The method assumes a capable sonar system consisting of many types of sonar on a single own-ship, so that different target parameter measurements can be obtained simultaneously. Three measurements are used: passive bearing, elevation, and multipath time delay. They are divided into two groups, and two preliminary target parameter estimates are obtained by processing each group of measurements independently. These correlated estimates are then sent to a fusion center, where the correlation between the two data groups is taken into account so that passive underwater TMA is realized. Simulation results show that the parameter estimation error curves obtained with data fusion converge quickly and that the estimation accuracy is noticeably improved. The TMA algorithm presented is verified and is of practical significance because it is easy to realize on a single ship.
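A simplified sketch of the fusion step, assuming the two preliminary estimates are independent (the paper's fusion center additionally accounts for their cross-correlation): two covariance-weighted target-parameter estimates are combined into one. The state layout and numbers are purely illustrative.

```python
# Covariance-weighted (BLUE) fusion of two unbiased target-parameter estimates.
import numpy as np

def fuse(x1, P1, x2, P2):
    """Fuse two estimates, treated here as independent for simplicity."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(P1_inv + P2_inv)          # fused covariance
    x = P @ (P1_inv @ x1 + P2_inv @ x2)         # fused estimate
    return x, P

# toy target estimates [range, course, speed] from two measurement groups
x1, P1 = np.array([5100., 42., 8.2]), np.diag([400., 9., 0.5])
x2, P2 = np.array([4950., 45., 7.9]), np.diag([250., 16., 0.3])
x_fused, P_fused = fuse(x1, P1, x2, P2)
```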
Magnetic resonance spectroscopy (MRS) results are greatly influenced by the reconstruction of the spectrum and the quantitative analysis. For this reason, a number of programs dedicated to MRS data analysis have been developed. The selection and use of appropriate software is crucial not only in clinical procedures but also in scientific research. The choice of software to suit the user's needs should be based on an analysis of the functionality of the program. It is particularly important from the user's viewpoint to identify what data can be loaded and processed in the program. Different programs allow the user different degrees of control over the analysis parameters, and they also differ in the signal processing algorithms they apply. The aim of this work, therefore, is to review the available packages designed for MRS data analysis, taking into account their capabilities and limitations.
In e-commerce, multidimensional data analysis based on Web data requires integrating various data sources, such as XML data and relational data, at the conceptual level. A conceptual data description approach to the multidimensional data model, the UML galaxy diagram, is presented in order to conduct multidimensional data analysis for multiple subjects. The approach is illustrated with a 2_roots UML galaxy diagram case covering the marketing analysis of TV products involving one retailer and several suppliers.
The paper studies non-zero slacks in data envelopment analysis. A procedure is developed for the treatment of non-zero slacks, so that DEA projections can be done in just one step.
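For context, a brief sketch of the standard input-oriented CCR projection formula, which moves an inefficient DMU onto the frontier once the efficiency score and optimal slacks are known; the paper's contribution is a procedure that obtains this projection in a single step. The numbers below are illustrative.

```python
# Standard input-oriented DEA projection: contract inputs by theta, then remove slacks.
import numpy as np

def dea_projection(x_o, y_o, theta, s_minus, s_plus):
    """x* = theta * x_o - s_minus,  y* = y_o + s_plus."""
    x_target = theta * np.asarray(x_o) - np.asarray(s_minus)
    y_target = np.asarray(y_o) + np.asarray(s_plus)
    return x_target, y_target

x_t, y_t = dea_projection([4., 8.], [6.], theta=0.75, s_minus=[0., 0.5], s_plus=[0.2])
# x_t = [3.0, 5.5], y_t = [6.2]
```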
The present study focused on analyzing the technical efficiency of rice farms in the southwest of Niger. Data from a January–March 2015 survey of 148 farms in three districts of southwestern Niger were analyzed using the DEA-Tobit two-step method. In the first step, data envelopment analysis (DEA) was applied to estimate technical, pure technical, and scale efficiency. In the second step, Tobit regression was used to identify the factors affecting technical efficiency. The results showed that rice producers in the southwest of Niger could reduce their inputs by 52% and still produce the same level of rice output. The Tobit regression showed that factors such as farm size, experience in rice farming, cooperative membership, main occupation, and land ownership had a direct impact on technical efficiency.
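To make the second step concrete, the sketch below fits a Tobit regression of DEA efficiency scores (right-censored at 1) on farm characteristics by maximum likelihood with scipy; the covariates and data are simulated stand-ins, not the Niger survey data.

```python
# Tobit (censored) regression of efficiency scores on farm characteristics, fitted by MLE.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 148
X = np.column_stack([np.ones(n),                     # intercept
                     rng.normal(2.0, 0.8, n),        # farm size (ha), simulated
                     rng.integers(0, 25, n)])        # years of rice-farming experience, simulated
latent = X @ np.array([0.7, 0.08, 0.01]) + rng.normal(0, 0.1, n)
y = np.minimum(latent, 1.0)                          # efficiency scores censored at 1

def neg_loglik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    mu = X @ beta
    cens = y >= 1.0
    ll_unc = norm.logpdf(y[~cens], mu[~cens], sigma)      # uncensored observations
    ll_cen = norm.logsf((1.0 - mu[cens]) / sigma)         # P(latent efficiency >= 1)
    return -(ll_unc.sum() + ll_cen.sum())

res = minimize(neg_loglik, x0=np.zeros(X.shape[1] + 1), method="BFGS")
beta_hat = res.x[:-1]                                # marginal effects of the covariates
```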
Blind separation of intercepted signals is a research topic of high importance for both military and civilian communication systems. A blind separation method for space-time block code (STBC) systems based on ordinary independent component analysis (ICA) cannot work when certain complex modulations are employed, since the assumption of mutual independence is not satisfied. The analysis shows that in this case the source signals are group-wise independent, so multidimensional ICA (MICA) should be applied instead of ordinary ICA. Utilizing the block-diagonal structure of the cumulant matrices, the JADE algorithm is generalized to the multidimensional case to separate the received data into mutually independent groups. Compared with ordinary ICA algorithms, the proposed method does not introduce additional ambiguities. Simulations show that the proposed method overcomes the drawback and, without using coding information, achieves better performance than channel estimation based algorithms.
A similarity measure for discrete data groups was proposed, and a similarity measure for continuous membership functions was also designed. The proposed similarity measures were constructed from fuzzy numbers and distance measures, and their validity was proved. To calculate the degree of similarity of discrete data, the relative degree between each datum and the total distribution was obtained, and the discrete-data similarity measure was completed by combining these relative degrees. A power interconnected system with multiple characteristics was considered as an application of the discrete similarity measure. The similarity measure was then naturally extended to the multi-dimensional case and applied to a bus clustering problem.
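A small sketch of the generic "similarity from distance" construction such measures build on: for two discrete membership vectors, s(A, B) = 1 − d(A, B) with d the normalised Hamming distance. The membership grades are hypothetical and the paper's specific measures are not reproduced.

```python
# Distance-based similarity between two discrete membership vectors.
import numpy as np

def similarity(mu_a, mu_b):
    mu_a, mu_b = np.asarray(mu_a, float), np.asarray(mu_b, float)
    d = np.mean(np.abs(mu_a - mu_b))      # normalised Hamming distance, in [0, 1]
    return 1.0 - d                        # similarity, in [0, 1]

bus_a = [0.9, 0.7, 0.2, 0.1]              # hypothetical membership grades of two buses
bus_b = [0.8, 0.6, 0.3, 0.1]
print(similarity(bus_a, bus_b))
```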
Outlier detection is an important task in data mining. In practice, it is difficult to find the clustering centers in sophisticated multidimensional datasets and to measure the deviation degree of each potential outlier. In this work, an effective outlier detection method based on multi-dimensional clustering and local density (ODBMCLD) is proposed. ODBMCLD first identifies center objects by the local density peaks of the data objects and clusters the whole dataset around these centers. Outlying objects belonging to different clusters are then marked as candidate anomalies. Finally, the top N candidates with the highest outlier factors are chosen as the final anomalies. The feasibility and effectiveness of the method are verified by experiments.
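An illustrative sketch of the local-density ingredient of ODBMCLD, assuming a simple cutoff-kernel density: the points with the lowest local density are reported as outlier candidates. The full method additionally clusters the data around density peaks and ranks candidates by outlier factor.

```python
# Report the N points with the lowest local density as outlier candidates.
import numpy as np

def local_density_outliers(data, d_c, n_outliers):
    dist = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    rho = (dist < d_c).sum(axis=1) - 1        # neighbours within the cutoff (excluding self)
    return np.argsort(rho)[:n_outliers]       # indices of the sparsest points

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(0, 1, (100, 2)),
                  rng.normal(6, 1, (100, 2)),
                  [[3.0, 10.0], [-5.0, 7.0]]])   # two injected anomalies
print(local_density_outliers(data, d_c=1.0, n_outliers=2))
```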
With the vigorous expansion of nonlinear adaptive filtering with real-valued kernel functions, complex kernel adaptive filtering algorithms have also been proposed to solve the complex-valued nonlinear problems arising in almost all real-world applications. This paper first presents two schemes of complex Gaussian kernel-based adaptive filtering algorithms and illustrates their respective characteristics. The theoretical convergence behavior of the complex Gaussian kernel least mean square (LMS) algorithm is then studied using the fixed dictionary strategy. The simulation results demonstrate that the theoretical curves predicted by the derived analytical models consistently coincide with the Monte Carlo simulation results, in both the transient and steady-state stages, for the two introduced complex Gaussian kernel LMS algorithms using non-circular complex data. The analytical models can thus be regarded as a theoretical tool for evaluating and comparing the mean square error (MSE) performance of complex kernel LMS (KLMS) methods for a specified kernel bandwidth and dictionary length.
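A compact sketch of a kernel LMS filter on complex-valued data: the prediction is a kernel expansion over past inputs, and each step appends the new input with a coefficient proportional to the a-priori error. A real Gaussian kernel evaluated on complex vectors is used here for simplicity; the exact kernel and conjugation conventions differ between the two complex-Gaussian-kernel schemes the paper analyzes, and the toy system below is assumed.

```python
# Kernel LMS with complex coefficients on a toy non-circular complex sequence.
import numpy as np

def gauss_kernel(u, v, bandwidth):
    return np.exp(-np.linalg.norm(u - v) ** 2 / (2 * bandwidth ** 2))

def cklms(inputs, desired, eta=0.5, bandwidth=1.0):
    """Returns the a-priori errors; the dictionary grows with every sample."""
    dictionary, coeffs, errors = [], [], []
    for u, d in zip(inputs, desired):
        y = sum(a * gauss_kernel(c, u, bandwidth) for a, c in zip(coeffs, dictionary))
        e = d - y                              # a-priori error (complex)
        dictionary.append(u)
        coeffs.append(eta * e)                 # new complex expansion coefficient
        errors.append(e)
    return np.array(errors)

rng = np.random.default_rng(3)
x = rng.standard_normal(200) + 0.3j * rng.standard_normal(200)   # non-circular input
u = np.stack([x[:-1], x[1:]], axis=1)                             # length-2 input vectors
d = x[1:] + 0.1 * x[:-1] ** 2                                     # mildly nonlinear target
learning_curve = np.abs(cklms(u, d)) ** 2                         # squared a-priori errors
```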