Abstract
KPCA (kernel PCA), developed from PCA, can extract nonlinear feature components of samples. However, extracting the features of one sample with KPCA requires computing the kernel functions between that sample and all training samples, so the size of the training set limits the efficiency of feature extraction. To improve efficiency, it is assumed that each transformation axis in feature space can be linearly expressed by a subset of the training samples, called nodes, and an improved KPCA algorithm (IKPCA) is designed on this assumption. To extract the features of a sample, IKPCA only needs to compute the kernel functions between that sample and the nodes. Experimental results show that IKPCA is very close to KPCA in performance while being markedly more efficient.
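The contrast the abstract draws can be sketched in code: standard KPCA expands each axis over all n training samples, so projecting one sample costs n kernel evaluations, while the IKPCA idea restricts the expansion to m << n nodes. The node selection (first m samples) and the reduced eigenproblem below are one plausible realization for illustration, not necessarily the paper's exact procedure; kernel centering is also omitted in the reduced form for brevity.

```python
import numpy as np

def rbf(X, Y, gamma=0.5):
    """RBF kernel matrix between rows of X and rows of Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # training set, n = 100

# Standard KPCA: each axis is expanded over ALL n training samples,
# so projecting one new sample costs n kernel evaluations.
K = rbf(X, X)
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
w, A = np.linalg.eigh(J @ K @ J)        # eigenvalues in ascending order
A = A[:, ::-1][:, :5]                   # coefficients of the top-5 axes

x = rng.normal(size=(1, 3))             # a sample whose features we extract
z_full = rbf(x, X) @ A                  # needs k(x, x_i) for all n samples

# IKPCA idea: assume each axis is a linear combination of m << n nodes;
# then projecting a sample needs only m kernel evaluations.
m = 20
nodes = X[:m]                           # hypothetical node choice
K_nm = rbf(X, nodes)                    # n x m
K_mm = rbf(nodes, nodes) + 1e-8 * np.eye(m)
L = np.linalg.cholesky(K_mm)
# Reduced generalized eigenproblem  (K_nm^T K_nm) a = lam K_mm a,
# solved via the symmetric form  L^-1 (K_nm^T K_nm) L^-T u = lam u.
B = np.linalg.solve(L, K_nm.T @ K_nm) @ np.linalg.inv(L).T
v, U = np.linalg.eigh(B)
A_nodes = np.linalg.solve(L.T, U[:, ::-1][:, :5])

z_ikpca = rbf(x, nodes) @ A_nodes       # only m kernel evaluations
print(z_full.shape, z_ikpca.shape)      # both (1, 5)
```

Both projections yield a 5-dimensional feature vector, but the IKPCA-style projection touches only the m nodes at extraction time, which is the efficiency gain the abstract claims.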
Source
Strategic Study of CAE (《中国工程科学》)
2005, Issue 10, pp. 38-42 (5 pages)
Funding
Supported by the National Natural Science Foundation of China (Grant No. 60072034)
Keywords
KPCA (kernel PCA)
IKPCA (improved KPCA)
feature extraction
feature space
About the Author
XU Yong (b. 1972), male, from Jianyang, Sichuan; Ph.D. candidate, Department of Computer Science and Technology, Nanjing University of Science and Technology