Funding: the National Natural Science Foundation of China (Grant Nos. 11534009 and 11974285).
Abstract: In the underwater waveguide, the conventional adaptive subspace detector (ASD), derived by using the generalized likelihood ratio test (GLRT) theory, suffers from a significant degradation in detection performance when training data samples are scarce. This paper proposes a dimension-reduced approach to alleviate this problem. The dimension reduction includes two steps: firstly, the full array is divided into several subarrays; secondly, the test data and the training data at each subarray are transformed from the hydrophone domain into the modal domain. Then the modal-domain test data and training data at each subarray are processed to formulate the subarray statistic by using the GLRT theory. The final test statistic of the dimension-reduced ASD (DR-ASD) is obtained by summing all the subarray statistics. After the dimension reduction, the unknown parameters can be estimated more accurately, so the DR-ASD achieves a better detection performance than the ASD. In order to achieve the optimal detection performance, the processing gain of the DR-ASD is deduced to choose a proper number of subarrays. Simulation experiments verify the improved detection performance of the DR-ASD compared with the ASD.
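A rough numpy sketch of the summing step may help. It assumes an identity noise covariance per subarray and random stand-in modal bases (`asd_statistic`, `dr_asd_statistic`, and all shapes are hypothetical illustrations, not the paper's implementation); each subarray contributes an ACE-type GLRT statistic in [0, 1], and the DR-ASD statistic is their sum.

```python
import numpy as np

rng = np.random.default_rng(0)

def asd_statistic(x, H, S_inv):
    # ACE-type subspace statistic: fraction of the whitened test vector's
    # energy that falls inside the whitened signal subspace spanned by H.
    G = H.conj().T @ S_inv @ H
    num = x.conj().T @ S_inv @ H @ np.linalg.solve(G, H.conj().T @ S_inv @ x)
    return float(np.real(num) / np.real(x.conj().T @ S_inv @ x))

def dr_asd_statistic(x, modal_bases):
    # Split the full-array snapshot into one segment per subarray, project
    # each onto its modal-domain basis, and sum the per-subarray statistics.
    subs = np.array_split(x, len(modal_bases))
    total = 0.0
    for xs, H in zip(subs, modal_bases):
        S_inv = np.eye(len(xs))  # stand-in for the estimated noise covariance
        total += asd_statistic(xs, H, S_inv)
    return total
```

With an identity covariance each subarray statistic is the fraction of the test vector's energy inside the modal subspace, so the final statistic lies between 0 and the number of subarrays.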
Funding: supported by the National Natural Science Foundation of China (51105052, 61173163) and the Liaoning Provincial Natural Science Foundation of China (201102037).
Abstract: Driven by the needs of real applications such as text categorization and image classification, multi-label learning has gradually become a hot research topic in recent years, and much attention has been paid to multi-label classification algorithms. Considering that the high dimensionality of multi-label datasets may cause the curse of dimensionality and hamper the classification process, a dimensionality reduction algorithm, named multi-label kernel discriminant analysis (MLKDA), is proposed to reduce the dimensionality of multi-label datasets. MLKDA, with the kernel trick, processes the multiple labels integrally and realizes nonlinear dimensionality reduction with an idea similar to that of linear discriminant analysis (LDA). In the classification of multi-label data, the extreme learning machine (ELM) is an efficient algorithm that maintains good accuracy. MLKDA, combined with ELM, shows good performance in multi-label learning experiments on several datasets. The experiments on both static data and data streams show that MLKDA outperforms multi-label dimensionality reduction via dependence maximization (MDDM) and multi-label linear discriminant analysis (MLDA) for balanced datasets with stronger correlation between tags, and that ELM is also a good choice for multi-label classification.
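Since the abstract leans on ELM for the classification stage, a minimal numpy sketch of an extreme learning machine may be useful: a fixed random hidden layer followed by a closed-form least-squares solve for the output weights. The class name, sizes, and toy data are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

class ELM:
    """Minimal extreme learning machine: fixed random hidden layer,
    output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=50):
        self.n_hidden = n_hidden

    def fit(self, X, Y):
        # Y is an (n_samples, n_labels) 0/1 indicator matrix, so the same
        # solve covers single-label and multi-label targets.
        self.W = rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

Because only `beta` is learned, training cost is a single pseudoinverse, which is what makes ELM attractive after a dimensionality reduction step such as MLKDA.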
Funding: Project (60425310) supported by the National Science Fund for Distinguished Young Scholars; Project (10JJ6094) supported by the Hunan Provincial Natural Science Foundation of China.
Abstract: Multi-label data with high dimensionality often occur, which produces large time and energy overheads when used directly in classification tasks. To solve this problem, a novel algorithm called multi-label dimensionality reduction via semi-supervised discriminant analysis (MSDA) was proposed. It was expected to derive an objective discriminant function as smooth as possible on the data manifold by multi-label learning and semi-supervised learning. By virtue of the latent information provided by the graph weighted matrix of sample attributes and the similarity correlation matrix of partial sample labels, MSDA readily maximizes the separability between different classes and estimates the intrinsic geometric structure in the lower manifold space by employing unlabeled data. Extensive experimental results on several real multi-label datasets show that after dimensionality reduction using MSDA, the average classification accuracy is about 9.71% higher than that of other algorithms, and several evaluation metrics such as Hamming loss are also superior to those of other dimensionality reduction methods.
Funding: supported by the National Basic Research Program of China.
Abstract: With the extensive application of large-scale array antennas, the increasing number of array elements leads to an increasing dimension of received signals, making it difficult to meet the real-time requirements of direction of arrival (DOA) estimation due to the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which has high computational complexity and is prone to producing spurious peaks. In order to reduce the computational complexity of DOA estimation algorithms and improve their estimation accuracy with large numbers of array elements, this paper proposes a DOA estimation method based on the Krylov subspace and the weighted l_(1)-norm. The method uses the multistage Wiener filter (MSWF) iteration to solve for a basis of the Krylov subspace as an estimate of the signal subspace, further uses a measurement matrix to reduce the dimensionality of the signal subspace observation, constructs a weighting matrix, and combines sparse reconstruction to establish a convex optimization function based on the residual sum of squares and the weighted l_(1)-norm to solve for the target DOA. Simulation results show that the proposed method has high resolution under large array conditions, effectively suppresses spurious peaks, reduces computational complexity, and has good robustness in low signal-to-noise ratio (SNR) environments.
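To make the subspace step concrete: the order-m Krylov subspace generated by a covariance R and a vector b is span{b, Rb, ..., R^(m-1)b}, and the MSWF recursion spans the same space. The numpy sketch below builds an orthonormal basis by Gram-Schmidt (function name and shapes are illustrative assumptions, not the paper's MSWF implementation).

```python
import numpy as np

def krylov_basis(R, b, m):
    # Orthonormal basis of the order-m Krylov subspace
    # span{b, Rb, ..., R^(m-1) b}, built by Arnoldi/Gram-Schmidt;
    # the MSWF recursion generates the same subspace.
    Q = np.zeros((len(b), m), dtype=complex)
    q = b / np.linalg.norm(b)
    for j in range(m):
        Q[:, j] = q
        v = R @ q
        v = v - Q[:, :j + 1] @ (Q[:, :j + 1].conj().T @ v)  # orthogonalize
        q = v / np.linalg.norm(v)
    return Q
```

The appeal over eigendecomposition is cost: only m matrix-vector products with R are needed, rather than a full diagonalization of the sample covariance.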
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 52272437 and 52272370) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX24_0635).
Abstract: The high cost and low efficiency of full-scale vehicle experiments and numerical simulations limit the efficient development of armored vehicle occupant protection systems. The floor-occupant-seat local simulation model provides an alternative solution for quickly evaluating the performance of occupant protection systems. However, the error and rationality of the loading of the thin-walled floor in the local model cannot be ignored. This study proposed an equivalent loading method for the local model, which includes two parts: a dimensionality reduction method for the acceleration matrix and a joint optimization framework for the equivalent node coordinates. In the dimensionality reduction method, the dimension of the acceleration matrix was reduced based on improved kernel principal component analysis (KPCA), and a dynamic variable bandwidth was introduced to address the limitation that conventional KPCA fails to effectively measure the similarity between acceleration data. In addition, a least squares problem with forced displacement constraints was constructed to solve the correction matrix, thereby achieving the scale restoration of the principal component acceleration matrix. The joint optimization framework for coordinates consists of the error assessment of response time histories (EARTH) and Bayesian optimization. In this framework, the local loading error of the equivalent acceleration matrix is taken as the Bayesian optimization objective, which is quantified and scored by EARTH. The expected improvement acquisition function was used to select new sets of equivalent acceleration node coordinates for the self-updating optimization of the observation dataset and the Gaussian process surrogate model. We reduced the dimension of the acceleration matrix from 2256 to 7 while retaining 91% of the information features. The comprehensive error score of the occupant's lower-limb response in the local model increased from 58.5% to 80.4%. The proposed equivalent loading method provides a solution for the rapid and reliable development of occupant protection systems.
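For readers unfamiliar with the baseline, the sketch below shows conventional fixed-bandwidth RBF kernel PCA in numpy; the paper's improvement replaces the fixed bandwidth with a dynamic variable one, which this sketch does not attempt, and the function name and toy sizes are assumptions.

```python
import numpy as np

def rbf_kpca(X, n_components, gamma=1.0):
    # Conventional kernel PCA: build a fixed-bandwidth RBF kernel matrix,
    # double-center it, and take scores from the top eigenpairs.
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(J @ K @ J)        # eigenvalues in ascending order
    w = w[::-1].clip(min=0.0)               # descending; drop round-off negatives
    V = V[:, ::-1]
    scores = V[:, :n_components] * np.sqrt(w[:n_components])
    retained = w[:n_components].sum() / w.sum()
    return scores, retained
```

The `retained` ratio plays the same role as the "91% of the information features" figure in the abstract: the share of centered-kernel variance kept by the leading components.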
Funding: supported by the National Natural Science Foundation of China (62271255, 61871218), the Fundamental Research Funds for the Central Universities (3082019NC2019002), the Aeronautical Science Foundation (ASFC-201920007002), and the Program of Remote Sensing Intelligent Monitoring and Emergency Services for Regional Security Elements.
Abstract: In order to extract richer feature information of ship targets from sea clutter and address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets using the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, which helps extract the edge information from sea clutter and further improves the target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes the class separability in the reduced dimensionality via a reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract the features of ship targets from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
Funding: supported by the Research Grants Council of Hong Kong under Grant No. 17301214, HKU CERG Grants, the Fundamental Research Funds for the Central Universities, the Research Funds of Renmin University of China, the Hung Hing Ying Physical Research Grant, and the Natural Science Foundation of China under Grant No. 11271144.
Abstract: Driven by the challenge of integrating large amounts of experimental data, classification techniques have emerged as one of the major and popular tools in computational biology and bioinformatics research. Machine learning methods, especially kernel methods with Support Vector Machines (SVMs), are very popular and effective tools. From the perspective of the kernel matrix, a technique, namely Eigen-matrix translation, has been introduced for protein data classification. The Eigen-matrix translation strategy has many nice properties that deserve more exploration. This paper investigates the major role of Eigen-matrix translation in classification. The authors propose that its importance lies in the dimension reduction of predictor attributes within the data set, which is very important when the dimension of features is huge. The authors show by numerical experiments on real biological data sets that the proposed framework is crucial and effective in improving classification accuracy. It can therefore serve as a novel perspective for future research in dimension reduction problems.
Funding: Projects (50275150, 61173052) supported by the National Natural Science Foundation of China.
Abstract: Dimensionality reduction methods play an important role in face recognition. Principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) are two important methods in this field. Recent research suggests that 2DPCA is superior to PCA. To examine whether this conclusion is always true, a comprehensive comparison study between the PCA and 2DPCA methods was carried out. A novel concept, called column-image difference (CID), was proposed to analyze the difference between PCA and 2DPCA in theory. It is found that there exist some restrictive conditions under which 2DPCA outperforms PCA. After the theoretical analysis, experiments were conducted on four famous face image databases. The experimental results confirm the validity of the theoretical claim.
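The two methods differ in what they diagonalize, which a small numpy sketch makes concrete (function names and toy shapes are illustrative): PCA flattens each h×w image and projects onto eigenvectors of the d×d covariance (d = h·w), while 2DPCA keeps the matrix form and projects columns onto eigenvectors of a w×w image scatter matrix, so each image becomes an h×k feature matrix.

```python
import numpy as np

def pca_features(images, k):
    # Classical PCA: flatten each h*w image into a vector, then project
    # onto the top-k right singular vectors of the centered data matrix.
    X = images.reshape(len(images), -1)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                    # shape (n, k)

def twodpca_features(images, k):
    # 2DPCA: keep each image as a matrix and diagonalize the w*w image
    # scatter matrix G = (1/N) sum_n A_n^T A_n.
    A = images - images.mean(axis=0)
    G = np.einsum('nij,nik->jk', A, A) / len(A)
    _, V = np.linalg.eigh(G)                # ascending eigenvalues
    Vk = V[:, ::-1][:, :k]                  # top-k eigenvectors of G
    return A @ Vk                           # shape (n, h, k)
```

The size difference is the practical point: for h = w = 100, PCA diagonalizes a 10000×10000 covariance while 2DPCA only diagonalizes a 100×100 scatter matrix, at the cost of a matrix-valued (h×k) feature per image.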
Funding: supported by the National Natural Science Foundation of China (61472305), the Science Research Program of Xi'an, China (2017073CG/RC036CXDKD003), and the Aeronautical Science Foundation of China (20151981009).
Abstract: Multi-label classification problems arise frequently in text categorization and many other related applications. Like conventional categorization problems, multi-label categorization tasks suffer from the curse of high dimensionality. Existing multi-label dimensionality reduction methods mainly suffer from two limitations. First, latent nonlinear structures in the input space are not utilized. Second, the label information is not fully exploited. This paper proposes a new method, multi-label local discriminative embedding (MLDE), which exploits latent structures to minimize intraclass distances and maximize interclass distances on the basis of label correlations. The latent structures are extracted by constructing two sets of adjacency graphs to make use of nonlinear information. Non-symmetric label correlations, which is typically the case in real applications, are adopted. The problem is formulated into a global objective function, and a linear mapping is obtained to solve out-of-sample problems. Empirical studies across 11 Yahoo sub-tasks, Enron, and Bibtex validate the superiority of MLDE over state-of-the-art multi-label dimensionality reduction methods.
Funding: supported by the National Natural Science Foundation of China (61077079), the Specialized Research Fund for the Doctoral Program of Higher Education (20102304110013), and the Program for Excellent Academic Leaders of Harbin (2009RFXXG034).
Abstract: The compressive sensing (CS) theory allows signals to be acquired at a rate much lower than that required by the sampling theorem. Because the theory is based on the assumption that the locations of the sparse values are unknown, it has many constraints in practical applications. In fact, in many cases such as image processing, the locations of the sparse values are knowable, and CS can degrade to a linear process. In order to take full advantage of the visual information of images, this paper proposes the concept of a dimensionality reduction transform matrix and then selects sparse values by constructing an accuracy control matrix; on this basis, a degradation algorithm is designed in which the signal can be obtained from as many measurements as there are sparse values and reconstructed through a linear process. In comparison with similar methods, the degradation algorithm is effective in reducing the number of sensors and improving operational efficiency. The algorithm is also used to achieve the CS process with the same amount of data as Joint Photographic Experts Group (JPEG) compression and acquires the same display effect.
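The claim that CS "degrades to a linear process" when the sparse locations are known can be sketched directly: restrict the sensing matrix to the known support columns and solve a least-squares problem, with no iterative sparse recovery needed. Names and sizes below are illustrative, not the paper's construction.

```python
import numpy as np

def recover_known_support(Phi, y, support):
    # With the support known, y = Phi x reduces to a small linear system
    # in the few nonzero coefficients: solve it and embed the result back.
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x
```

Recovery is exact whenever the number of measurements is at least the support size and the restricted columns are linearly independent, which is the sense in which the measurement count can shrink to the number of sparse values.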
Funding: Project (40901216) supported by the National Natural Science Foundation of China.
Abstract: A novel supervised dimensionality reduction algorithm, named discriminant embedding by sparse representation and nonparametric discriminant analysis (DESN), was proposed for face recognition. Within the framework of DESN, the sparse local scatter and the multi-class nonparametric between-class scatter were exploited for within-class compactness and between-class separability description, respectively. These descriptions, inspired by sparse representation theory and nonparametric techniques, are more discriminative in dealing with complexly distributed data. Furthermore, DESN seeks the optimal projection matrix by simultaneously maximizing the nonparametric between-class scatter and minimizing the sparse local scatter. The use of Fisher discriminant analysis further boosts the discriminating power of DESN. The proposed DESN was applied to data visualization and face recognition tasks and was tested extensively on the Wine, ORL, Yale, and Extended Yale B databases. Experimental results show that DESN is helpful for visualizing the structure of high-dimensional data sets, and the average face recognition rate of DESN is about 9.4% higher than that of other algorithms.