Funding: Supported by the National Natural Science Foundation of China (62171088, U19A2052, 62020106011) and the Medico-Engineering Cooperation Funds from the University of Electronic Science and Technology of China (ZYGX2021YGLH215, ZYGX2022YGRH005).
Abstract: Deep neural networks (DNNs) have achieved great success in many data-processing applications. However, high computational complexity and storage cost make deep learning difficult to deploy on resource-constrained devices, and its high power consumption is not environmentally friendly. In this paper, we focus on low-rank optimization for efficient deep learning. In the space domain, DNNs are compressed by low-rank approximation of the network parameters, which directly reduces the storage requirement through a smaller number of parameters. In the time domain, the network parameters can be trained in a few subspaces, which enables efficient training with fast convergence. We summarize model compression in the spatial domain into three categories: pre-train, pre-set, and compression-aware methods. Combined with a series of integrable techniques, such as sparse pruning, quantization, and entropy coding, these methods can be assembled into an integrated framework with lower computational complexity and storage cost. In addition to summarizing recent technical advances, we report two findings to motivate future work. One is that the effective rank, derived from the Shannon entropy of the normalized singular values, outperforms conventional sparsity measures such as the ℓ1 norm for network compression. The other is a spatial-temporal balance for tensorized neural networks: to accelerate their training, it is crucial to leverage redundancy for both model compression and subspace training.
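To make the two space-domain ideas above concrete, the following NumPy sketch (our illustration, not code from the paper; the function names, the test matrix, and the rank r = 64 are assumptions for demonstration) computes the effective rank as the exponential of the Shannon entropy of the normalized singular values, next to a plain truncated-SVD compression of a weight matrix.

import numpy as np

def effective_rank(W):
    """exp(Shannon entropy of the normalized singular values) of W."""
    s = np.linalg.svd(W, compute_uv=False)
    p = s / s.sum()            # normalize singular values into a distribution
    p = p[p > 0]               # drop exact zeros so log() is defined
    return float(np.exp(-np.sum(p * np.log(p))))

def truncated_svd(W, r):
    """Rank-r factors whose product approximates W; stores r*(m+n) values instead of m*n."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] * s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 512))  # rank <= 64 by construction
print("effective rank:", effective_rank(W))   # <= 64; smaller when singular values decay
A, B = truncated_svd(W, r=64)
print("relative error:", np.linalg.norm(W - A @ B) / np.linalg.norm(W))  # ~0 at r = 64

Sweeping r below the effective rank trades reconstruction error against storage, which is the basic space-domain trade-off discussed above.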
Abstract: Large language models (LLMs) have strong natural-language-understanding and complex-problem-solving capabilities. This paper builds a mineral question-answering system on top of an LLM to provide efficient access to mineral knowledge. The system first collects mineral data from Internet sources and, after cleaning, structures them into mineral documents and question-answer pairs. The mineral documents are format-converted and indexed into a mineral knowledge base used for retrieval-augmented generation, while the question-answer pairs are used to fine-tune the LLM. When the knowledge base augments generation, a two-stage retrieval scheme, coarse recall followed by fine re-ranking, is adopted to obtain better generation results. Fine-tuning adopts the mainstream Low-Rank Adaptation (LoRA) method, which achieves performance comparable to full-parameter fine-tuning with far fewer trainable parameters, saving computational resources. Experimental results show that the retrieval-augmented, LLM-based mineral question-answering system can retrieve mineral knowledge quickly and with high accuracy.
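To make the LoRA step concrete, here is a minimal NumPy sketch of the low-rank update (our illustration under assumed shapes and hyperparameters, not the system's implementation): the pretrained weight W is frozen and only a rank-r pair of factors B and A is trained, so the adapted layer computes x @ (W + (alpha / r) * B @ A)^T.

import numpy as np

class LoRALinear:
    """Illustrative LoRA layer: frozen base weight plus a trainable rank-r update."""
    def __init__(self, W, r=8, alpha=16.0):
        d_out, d_in = W.shape
        self.W = W                                 # frozen pretrained weight
        self.A = np.random.randn(r, d_in) * 0.01   # trainable, small random init
        self.B = np.zeros((d_out, r))              # trainable, zero init:
        self.scale = alpha / r                     # the update starts as a no-op

    def __call__(self, x):
        # base path plus scaled low-rank adapter path
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(np.random.randn(4096, 4096), r=8)
print("trainable params:", layer.A.size + layer.B.size,  # 2 * 8 * 4096 = 65,536
      "vs full:", layer.W.size)                          # 4096^2 = 16,777,216

The zero initialization of B makes the adapter an exact no-op before training, and at r = 8 the trainable parameters in this sketch are about 0.4% of the full weight, the kind of saving over full-parameter fine-tuning that the abstract describes.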