Abstract: Purpose: This paper compares the paradigmatic differences between knowledge organization (KO) in library and information science and knowledge representation (KR) in AI to show the convergence in KO and KR methods and applications. Methodology: A literature review and a comparative analysis of the KO and KR paradigms are the primary methods used in this paper. Findings: A key difference between KO and KR lies in their purpose: KO organizes knowledge into a structure that standardizes and/or normalizes the vocabulary of concepts and relations, while KR is problem-solving oriented. Differences between KO and KR are discussed in terms of goals, methods, and functions. Research limitations: This is only preliminary research, with a case study as a proof of concept. Practical implications: The paper articulates the opportunities for applying KR and other AI methods and techniques to enhance the functions of KO. Originality/value: Ontologies and linked data, as evidence of the convergence of the KO and KR paradigms, provide theoretical and methodological support for innovating KO in the AI era.
Funding: Supported by the National Key Basic Research Program of China (973 Program) under Grant No. 2007CB310804 and the China Postdoctoral Science Foundation under Grants No. 20090460107 and 201003794.
Abstract: The continuing evolution of computing environments, software engineering, and interaction methods has led to cloud computing. Cloud computing changes how resources on the Internet are configured, and all kinds of resources are virtualized and provided as services. Mass participation and online interaction with social annotations have become commonplace in daily life. People who share similar interests on the Internet may cluster naturally into scalable and boundless communities, from which collective intelligence emerges. Humans are treated as intelligent computing factors, and uncertainty becomes a basic property of cloud computing. Virtualization, soft computing, and granular computing will become essential features of cloud computing. Compared with the engineering problems of IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service), collective intelligence and uncertain knowledge representation will be more important frontiers in cloud computing for researchers within the intelligence science community.
Abstract: A method of knowledge representation and learning based on fuzzy Petri nets was designed. With this method, the weight, threshold, and certainty-factor parameters of the knowledge model can be adjusted dynamically. The advantages of knowledge representation based on production rules and on neural networks are integrated in this method. Like production-rule knowledge representation, it has a clear structure and parameters with specific meanings; in addition, it has the learning and parallel reasoning abilities of neural-network knowledge representation. Simulation results show that the learning algorithm converges and that the weight, threshold, and certainty-factor parameters reach suitable values after training.
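To make the idea concrete, the following sketch (not the paper's implementation) treats a single fuzzy production rule as a trainable unit whose antecedent weights, firing threshold, and certainty factor can be adjusted from data, in the spirit of the fuzzy-Petri-net model described above; all names and numeric values are illustrative assumptions.

```python
# Minimal sketch of a trainable fuzzy production rule (weights, threshold,
# certainty factor), illustrating the fuzzy-Petri-net style of representation.
from dataclasses import dataclass
from typing import List


@dataclass
class FuzzyRule:
    weights: List[float]          # antecedent weights w_i
    threshold: float = 0.3        # firing threshold lambda
    certainty: float = 0.9        # certainty factor mu of the rule

    def fire(self, truth_degrees: List[float]) -> float:
        """Return the truth degree of the conclusion place."""
        activation = sum(w * d for w, d in zip(self.weights, truth_degrees))
        return activation * self.certainty if activation >= self.threshold else 0.0

    def train_step(self, truth_degrees: List[float], target: float, lr: float = 0.05) -> None:
        """One gradient-style update so the rule's output approaches the target."""
        error = target - self.fire(truth_degrees)
        for i, d in enumerate(truth_degrees):
            self.weights[i] += lr * error * d
        self.certainty += lr * error


rule = FuzzyRule(weights=[0.5, 0.5])
for _ in range(100):
    rule.train_step([0.8, 0.6], target=0.7)
print(round(rule.fire([0.8, 0.6]), 3))
```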
Funding: Supported by a grant from AHRQ (1R01HS022895) and a patient safety grant from the University of Texas System (#156374).
Abstract: Purpose: The current development of patient safety reporting systems is criticized for loss of information and low data quality due to the lack of a unified domain knowledge base and text processing functionality. To improve patient safety reporting, the present paper suggests an ontological representation of patient safety knowledge. Design/methodology/approach: We propose a framework for constructing an ontological knowledge base of patient safety. The present paper describes our design, implementation, and evaluation of the ontology at its initial stage. Findings: We describe the design and initial outcomes of the ontology implementation. The evaluation results demonstrate the clinical validity of the ontology, measured by a self-developed survey. Research limitations: The proposed ontology was developed and evaluated using a small number of information sources. Presently, US data are used, but they are not essential to the ultimate structure of the ontology. Practical implications: The goal of improving patient safety can be aided by investigating patient safety reports and providing actionable knowledge to clinical practitioners. As such, constructing a domain-specific ontology for patient safety reports serves as a cornerstone for information collection and text mining methods. Originality/value: The use of ontologies provides an abstracted representation of semantic information and enables a wealth of applications in a reporting system. Therefore, constructing such a knowledge base is recognized as a high priority in health care.
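As a rough illustration of what such an ontological representation could look like in code, the following sketch uses rdflib to encode a few hypothetical patient-safety classes and one relation; the namespace, class names, and property names are assumptions made for illustration, not the ontology developed in the paper.

```python
# Minimal sketch of an ontological fragment for patient-safety reporting,
# expressed as RDFS triples with rdflib (hypothetical names throughout).
from rdflib import Graph, Namespace, RDF, RDFS, Literal

PS = Namespace("http://example.org/patient-safety#")   # hypothetical namespace
g = Graph()
g.bind("ps", PS)

# Class hierarchy: a Fall is a kind of SafetyIncident.
g.add((PS.SafetyIncident, RDF.type, RDFS.Class))
g.add((PS.ContributingFactor, RDF.type, RDFS.Class))
g.add((PS.Fall, RDF.type, RDFS.Class))
g.add((PS.Fall, RDFS.subClassOf, PS.SafetyIncident))
g.add((PS.Fall, RDFS.label, Literal("Patient fall incident")))

# A property linking an incident to a contributing factor.
g.add((PS.hasContributingFactor, RDF.type, RDF.Property))
g.add((PS.hasContributingFactor, RDFS.domain, PS.SafetyIncident))
g.add((PS.hasContributingFactor, RDFS.range, PS.ContributingFactor))

print(g.serialize(format="turtle"))
```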
Funding: Supported by the National Natural Science Foundation of China (No. 700400D1).
Abstract: The design scheme of an agricultural expert system based on longan and cauliflower planting techniques is presented. Using object-oriented design and a combination of techniques from multimedia, databases, expert systems, and artificial intelligence, an in-depth analysis and summary are made of the knowledge features of the agricultural multimedia expert system and the data models involved. According to the practical problems of the agricultural field, the architecture and functions of the system are designed, and some design ideas for hybrid knowledge representation and fuzzy reasoning are proposed.
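As a small illustrative sketch (not taken from the paper), the code below applies Mamdani-style fuzzy reasoning to a hypothetical planting rule, "IF soil is dry AND temperature is high THEN irrigation is heavy", using triangular membership functions; the membership ranges and the rule itself are assumptions.

```python
# Toy Mamdani-style fuzzy inference for a hypothetical agricultural rule.
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def irrigation_level(soil_moisture, temperature):
    dry = tri(soil_moisture, 0, 10, 40)       # hypothetical membership ranges
    hot = tri(temperature, 25, 35, 45)
    firing = min(dry, hot)                    # fuzzy AND = min (Mamdani)
    # Defuzzify against a single "heavy irrigation" singleton for brevity.
    return firing * 100                       # percent of maximum irrigation


print(irrigation_level(soil_moisture=15, temperature=33))
```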
Funding: This study was funded by the International Science and Technology Cooperation Program of the Science and Technology Department of Shaanxi Province, China (No. 2021KW-16) and the Science and Technology Project in Xi'an (No. 2019218114GXRC017CG018-GXYD17.11). This work was also supported by the Special Fund Construction Project of Key Disciplines in Ordinary Colleges and Universities in Shaanxi Province. The authors thank the anonymous reviewers for their helpful comments and suggestions.
Abstract: Text event mining, as an indispensable part of text mining, has attracted extensive attention from researchers. This paper proposes UKGE-MS, a modeling method for an event knowledge graph based on mutual information among neighbor domains and sparse representation. Specifically, UKGE-MS improves the ability of existing text mining technology to understand and discover high-dimensional unlabeled information, and it addresses the problem that traditional unsupervised feature selection methods only select features from a global perspective while ignoring the local connections among samples. First, considering the influence of local sample information in feature correlation evaluation, a feature clustering algorithm based on average neighborhood mutual information is proposed, yielding feature clusters with a degree of event correlation. Second, an unsupervised feature selection method based on the high-order correlation of multi-dimensional statistical data is designed by combining the dimension-reduction advantage of the locally linear embedding algorithm with the feature selection ability of sparse representation, so as to enhance the generalization ability of the selected features. Finally, the event knowledge graph is constructed by means of sparse representation and the l1 norm. Extensive experiments are carried out on five real datasets and on synthetic datasets, and UKGE-MS is compared with five corresponding algorithms. The experimental results show that UKGE-MS outperforms traditional methods in event clustering and feature selection and has advantages over other methods in text event recognition and discovery.
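The rough sketch below, written under simplifying assumptions, is not the UKGE-MS algorithm itself: it groups features by pairwise mutual information (a stand-in for the paper's average neighborhood mutual information) and then uses an l1-penalized model (Lasso) as a stand-in for sparse-representation-based feature selection; the toy data, cluster count, and regularization strength are assumptions.

```python
# Toy pipeline: mutual-information feature clustering followed by l1-based selection.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))               # toy data: 200 samples, 12 features
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200)


def binned(v, bins=8):
    """Discretize a continuous feature into bin labels for mutual information."""
    return np.digitize(v, np.histogram_bin_edges(v, bins))


# Pairwise mutual information between features, turned into a distance matrix.
mi = np.array([[mutual_info_score(binned(X[:, i]), binned(X[:, j]))
                for j in range(X.shape[1])] for i in range(X.shape[1])])
dist = mi.max() - mi                          # high MI -> small distance
np.fill_diagonal(dist, 0.0)

clusters = AgglomerativeClustering(
    n_clusters=4, metric="precomputed", linkage="average").fit_predict(dist)

# Sparse (l1) model; features with non-zero coefficients are kept.
coef = Lasso(alpha=0.05).fit(X, y).coef_
selected = [int(i) for i in np.flatnonzero(coef)]
print("feature clusters:", clusters.tolist())
print("selected features:", selected)
```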
Funding: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.
Abstract: Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward NLPCONTRIBUTIONGRAPH) for structuring, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) a pilot stage to define the scheme (described in prior work); and 2) an adjudication stage to normalize the graphing model (the focus of this paper). Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 previously annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme. Findings: Applying NLPCONTRIBUTIONGRAPH to the 50 articles resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1-score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that annotation decision variance grows with the granularity of the information. Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by an intra-annotator consensus only: a single annotator first annotated the data to propose the initial scheme, after which the same annotator re-annotated the data to normalize the annotations in an adjudication stage. However, the eventual goal of this work is a standardized retrospective model for capturing NLP contributions from scholarly articles, which would entail a larger initiative enlisting multiple annotators to accommodate different worldviews into a single final set of structures and relationships. Given that the initial scheme was being proposed for the first time, and given the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited; nevertheless, the model proposed in this work is presently limited because it does not incorporate multiple annotators' worldviews. Addressing this is planned as future work toward a more robust model. Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks. Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme for annotating research contributions from NLP articles and integrating them into a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations of the two-stage annotation tasks offer insights into task difficulty.
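As a rough illustration of the kind of data pipeline the abstract describes, the sketch below defines a minimal record type holding a contribution-focused sentence, its phrases, and surface-structured triples; the field names, the example sentence, and the triples are hypothetical and do not come from the NLPCONTRIBUTIONGRAPH dataset.

```python
# Minimal record type for a sentence -> phrases -> triples contribution pipeline.
from dataclasses import dataclass
from typing import List, Tuple

Triple = Tuple[str, str, str]   # (subject, predicate, object)


@dataclass
class ContributionRecord:
    sentence: str               # contribution-focused sentence from the article
    phrases: List[str]          # contribution-information-centered phrases
    triples: List[Triple]       # surface-structured triple statements


# Hypothetical example for illustration only.
record = ContributionRecord(
    sentence="We achieve 92.1 F1 on the CoNLL-2003 NER benchmark.",
    phrases=["92.1 F1", "CoNLL-2003 NER benchmark"],
    triples=[
        ("Contribution", "hasResult", "Result"),
        ("Result", "onDataset", "CoNLL-2003"),
        ("Result", "F1", "92.1"),
    ],
)
print(record.triples)
```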
Abstract: The core idea of knowledge distillation is to use a large model, acting as the teacher network, to guide a small model, acting as the student network, so as to improve the student network's performance on image classification tasks. Existing knowledge distillation methods usually extract class probabilities or feature information from individual input samples as the knowledge and do not model the relations between samples, which weakens the network's representation learning ability. To address this problem, this paper introduces a graph convolutional neural network: the set of input samples is treated as graph nodes to build a relation graph, so that every sample in the graph can aggregate information from the other samples and strengthen its own representation. A graph-representation knowledge distillation loss is constructed from two perspectives, graph nodes and graph relations, and meta-learning is used to guide the student network to adaptively learn the teacher network's better graph representation, improving the student network's graph modeling ability. Compared with the baseline method, the proposed graph-representation knowledge distillation method improves classification accuracy by 3.70% on the 100-class dataset released by the Canadian Institute For Advanced Research (CIFAR), indicating that the method guides the student network to learn a more discriminative feature space and improves image classification.
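To give a concrete, simplified picture of relation-aware distillation, the sketch below (not the paper's GCN-plus-meta-learning method) matches the batch-wise similarity graphs of teacher and student features and adds the usual soft-label KL term; all tensor shapes and hyperparameters are illustrative assumptions.

```python
# Toy relation-based distillation loss: node view (soft labels) + relation view
# (batch-wise feature similarity graphs), in PyTorch.
import torch
import torch.nn.functional as F


def relation_kd_loss(student_feat, teacher_feat, student_logits, teacher_logits,
                     temperature=4.0, alpha=0.5):
    # Node view: soften logits and match class distributions (standard KD).
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Relation view: cosine-similarity graphs over the batch, matched with MSE.
    def sim_graph(feat):
        feat = F.normalize(feat, dim=1)
        return feat @ feat.t()            # (batch, batch) relation matrix

    relation = F.mse_loss(sim_graph(student_feat), sim_graph(teacher_feat))
    return alpha * kd + (1 - alpha) * relation


# Toy usage with random tensors standing in for real network outputs.
s_feat, t_feat = torch.randn(8, 64), torch.randn(8, 128)
s_logits, t_logits = torch.randn(8, 100), torch.randn(8, 100)
print(relation_kd_loss(s_feat, t_feat, s_logits, t_logits).item())
```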