Linear RankSVM has been studied fairly thoroughly, but when training large-scale linear RankSVM models, the training time remains unacceptably long. An analysis of the current state-of-the-art algorithm, Tree-TRON, shows that training a linear RankSVM model with the trust region Newton method (TRON) involves a large number of Hessian-vector product computations, and that each Hessian-vector product in turn requires computing many auxiliary variables and matrix operations. To accelerate the computations related to Hessian-vector products, an efficient parallel algorithm, named PRankSVM, is proposed to speed up the training of large-scale linear RankSVM on multi-core systems. PRankSVM has two main features: the training data are partitioned by query into separate subproblems, and multiple cores are used to accelerate the computation of the auxiliary variables and related matrices. Experimental analysis shows that, compared with existing algorithms such as Tree-TRON, PRankSVM not only improves training speed effectively but also preserves prediction accuracy.
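To make the bottleneck concrete, below is a minimal Python sketch of a Hessian-vector product for an L2-regularized pairwise squared-hinge ranking loss, with the pairwise terms partitioned by query so that each query's contribution can be computed by a separate worker. The squared-hinge loss form, all function names, and the thread-pool parallelism are illustrative assumptions for exposition, not PRankSVM's actual implementation.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def query_hvp(X, pairs, w, v, C):
    """One query's contribution to the Hessian-vector product:
    2C * sum over active pairs (i, j) of (d^T v) d, where d = x_i - x_j."""
    out = np.zeros_like(v)
    for i, j in pairs:
        d = X[i] - X[j]
        if 1.0 - w @ d > 0.0:          # pair violates the margin (active)
            out += 2.0 * C * (d @ v) * d
    return out

def hessian_vector_product(X, pairs_by_query, w, v, C=1.0, workers=4):
    """Hv for f(w) = 0.5 ||w||^2 + C * sum max(0, 1 - w^T d)^2,
    with the pairwise sum split per query across a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda p: query_hvp(X, p, w, v, C), pairs_by_query)
    return v + sum(parts)              # identity term from the regularizer

# Toy usage: two queries with three documents each, five features.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5))
pairs_by_query = [[(0, 1), (0, 2)], [(3, 4), (3, 5)]]  # (better, worse) per query
print(hessian_vector_product(X, pairs_by_query, np.zeros(5), rng.standard_normal(5)))
```

Partitioning by query keeps each worker's preference pairs local to one subproblem, which is the same data layout the abstract describes for PRankSVM.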
Funding: Supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data processing applications. However, their high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and its large power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning techniques. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of network parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. Model compression in the spatial domain is summarized into three categories: pre-train, pre-set, and compression-aware methods. With a series of integrable techniques discussed, such as sparse pruning, quantization, and entropy coding, we can assemble them into an integrated framework with lower computational complexity and storage. In addition to a summary of recent technical advances, we present two findings to motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparse measures such as the ℓ1 norm for network compression. The other is a spatial and temporal balance for tensorized neural networks: to accelerate the training of tensorized neural networks, it is crucial to leverage redundancy for both model compression and subspace training.
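As a concrete illustration of the first finding, here is a minimal sketch of the effective rank: the exponential of the Shannon entropy of the normalized singular values. The function name and the toy matrix are illustrative; this is not the paper's code.

```python
import numpy as np

def effective_rank(W: np.ndarray, eps: float = 1e-12) -> float:
    """exp(Shannon entropy) of the singular-value distribution of W.

    Ranges from 1.0 (rank one) up to min(W.shape) (flat spectrum).
    """
    s = np.linalg.svd(W, compute_uv=False)  # singular values, descending
    p = s / (s.sum() + eps)                 # normalize into a distribution
    p = p[p > eps]                          # drop zeros before taking logs
    entropy = -(p * np.log(p)).sum()        # Shannon entropy in nats
    return float(np.exp(entropy))

# A low-rank weight matrix: the effective rank tracks the underlying
# rank, which a plain l1 norm of the singular values does not expose.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 256))
print(effective_rank(W))  # no larger than the true rank (8)
```

Because it is a smooth function of the whole spectrum, the effective rank rewards concentrating energy in few singular values, which is why it can serve as a compression-friendly surrogate measure.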
Abstract: Current information extraction tasks rely mainly on large language models (LLMs), but tender documents are full of domain-specific terminology for which the models lack prior knowledge, leading to inefficient fine-tuning and poor extraction performance. Moreover, a model's extraction and generalization performance depends heavily on the quality of the prompt information and the way the prompt template is constructed. To address these problems, a prompt-learning-based tender information extraction method (TIEPL) is proposed. First, a prompt-learning method for generative information extraction is used to inject domain knowledge into the LLM, unifying optimization across the pre-training and fine-tuning stages. Second, with the LoRA (Low-Rank Adaptation) fine-tuning method as the framework, a separate prompt-training bypass is designed, together with a keyword prompt template for the tendering scenario, strengthening the bidirectional association between the model's information extraction and the prompts. Experimental results on a self-built tendering-and-bidding dataset show that, compared with the second-best method, UIE (Universal Information Extraction), TIEPL improves ROUGE-L (Recall-Oriented Understudy for Gisting Evaluation) and BLEU-4 (BiLingual Evaluation Understudy) by 1.05 and 4.71 percentage points respectively, generating more accurate and complete extraction results and validating the effectiveness of the proposed method in improving the accuracy and generalization of tender information extraction.
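For context, the following is a minimal PyTorch sketch of the LoRA technique that TIEPL builds on: a frozen pretrained weight augmented with a trainable low-rank bypass B·A. It illustrates generic LoRA only; TIEPL's separate prompt-training bypass and its tender-scenario prompt templates are not reproduced here, and all names, shapes, and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A linear layer with a frozen base weight and a trainable
    low-rank bypass: y = W x + (alpha / r) * B A x."""

    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # starts at 0
        self.scale = alpha / rank          # standard LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update; only A and B train.
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768, rank=8)
out = layer(torch.randn(2, 768))  # gradients flow only through lora_A, lora_B
```

Initializing B to zero makes the bypass a no-op at the start of fine-tuning, so training begins exactly from the pretrained model and only the small rank-r matrices need to be stored per task.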