Funding: Supported by the National Natural Science Foundation of China (No. 700400D1).
Abstract: The design scheme of an agricultural expert system based on longan and cauliflower planting techniques is presented. Using an object-oriented design and a combination of multimedia, database, expert system, and artificial intelligence techniques, an in-depth analysis and summary are made of the knowledge features of the agricultural multimedia expert system and the data models involved. According to the practical problems in the agricultural field, the architecture and functions of the system are designed, and some design ideas about hybrid knowledge representation and fuzzy reasoning are proposed.
Abstract: Purpose: This paper compares the paradigmatic differences between knowledge organization (KO) in library and information science and knowledge representation (KR) in AI to show the convergence in KO and KR methods and applications. Methodology: A literature review and comparative analysis of the KO and KR paradigms is the primary method used in this paper. Findings: A key difference between KO and KR lies in their purposes: KO organizes knowledge into structures that standardize and/or normalize the vocabulary of concepts and relations, while KR is problem-solving oriented. Differences between KO and KR are discussed with respect to their goals, methods, and functions. Research limitations: This is only preliminary research, with a case study as a proof of concept. Practical implications: The paper articulates the opportunities in applying KR and other AI methods and techniques to enhance the functions of KO. Originality/value: Ontologies and linked data, as evidence of the convergence of the KO and KR paradigms, provide theoretical and methodological support to innovate KO in the AI era.
Funding: Supported by the National Key Basic Research Program of China (973 Program) under Grant No. 2007CB310804 and the China Post-doctoral Science Foundation under Grants No. 20090460107 and No. 201003794.
Abstract: The lasting evolution of computing environments, software engineering, and interaction methods leads to cloud computing. Cloud computing changes the configuration mode of resources on the Internet, and all kinds of resources are virtualized and provided as services. Mass participation and online interaction with social annotations have become usual in daily life. People who share similar interests on the Internet may cluster naturally into scalable and boundless communities, and collective intelligence will emerge. Humans are taken as intelligent computing factors, and uncertainty becomes a basic property in cloud computing. Virtualization, soft computing, and granular computing will become essential features of cloud computing. Compared with the engineering and technological problems of IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service), collective intelligence and uncertain knowledge representation will be more important frontiers in cloud computing for researchers within the community of intelligence science.
Abstract: A method of knowledge representation and learning based on fuzzy Petri nets was designed. In this way, the weight, threshold value, and certainty factor parameters in the knowledge model can be adjusted dynamically. The advantages of knowledge representation based on production rules and on neural networks were integrated into this method. Like production-rule knowledge representation, this method has a clear structure and specific parameter meanings. In addition, it has the learning and parallel reasoning ability of neural-network knowledge representation. The simulation results show that the learning algorithm converges and that the weight, threshold value, and certainty factor parameters reach the ideal level after training.
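For illustration only (not the authors' algorithm), the following minimal Python sketch shows how a single fuzzy Petri net production rule can carry trainable weights, a threshold, and a certainty factor; the class name, the firing function, and the simple error-driven update are assumptions made for the example.

```python
# Minimal sketch of one fuzzy Petri net rule with trainable parameters
# (weights, threshold, certainty factor); the update rule is illustrative.
class FuzzyRule:
    def __init__(self, weights, threshold, certainty):
        self.weights = list(weights)   # input-place weights
        self.threshold = threshold     # firing threshold
        self.certainty = certainty     # certainty factor of the rule

    def fire(self, tokens):
        """Weighted aggregation of input-place truth degrees."""
        s = sum(w * t for w, t in zip(self.weights, tokens))
        return self.certainty * s if s >= self.threshold else 0.0

    def train_step(self, tokens, target, lr=0.1):
        """One error-driven adjustment of the parameters (illustrative)."""
        err = target - self.fire(tokens)
        for i, t in enumerate(tokens):
            self.weights[i] += lr * err * t
        self.certainty += lr * err
        return abs(err)

rule = FuzzyRule(weights=[0.5, 0.5], threshold=0.3, certainty=0.8)
for _ in range(100):                   # repeat until the error shrinks
    rule.train_step(tokens=[0.9, 0.7], target=0.75)
print(round(rule.fire([0.9, 0.7]), 3))
```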
Funding: Supported by a grant from AHRQ (1R01HS022895) and a patient safety grant from the University of Texas system (#156374).
Abstract: Purpose: The current development of patient safety reporting systems is criticized for loss of information and low data quality due to the lack of a unified domain knowledge base and text processing functionality. To improve patient safety reporting, the present paper suggests an ontological representation of patient safety knowledge. Design/methodology/approach: We propose a framework for constructing an ontological knowledge base of patient safety. The present paper describes our design, implementation, and evaluation of the ontology at its initial stage. Findings: We describe the design and initial outcomes of the ontology implementation. The evaluation results demonstrate the clinical validity of the ontology through a self-developed survey measurement. Research limitations: The proposed ontology was developed and evaluated using a small number of information sources. Presently, US data are used, but they are not essential for the ultimate structure of the ontology. Practical implications: The goal of improving patient safety can be aided through investigating patient safety reports and providing actionable knowledge to clinical practitioners. As such, constructing a domain-specific ontology for patient safety reports serves as a cornerstone in information collection and text mining methods. Originality/value: The use of ontologies provides an abstracted representation of semantic information and enables a wealth of applications in a reporting system. Therefore, constructing such a knowledge base is recognized as a high priority in health care.
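As a hedged sketch of what an ontological representation of patient safety concepts could look like in code (not the ontology described in the paper), the following Python example builds a tiny RDFS class hierarchy with rdflib; the namespace, class names, and report instance are invented for the example.

```python
# Illustrative RDFS encoding of a few patient safety concepts with rdflib;
# the namespace and classes are hypothetical, not the paper's ontology.
from rdflib import Graph, Namespace, RDF, RDFS, Literal

PS = Namespace("http://example.org/patient-safety#")
g = Graph()
g.bind("ps", PS)

# A small, invented class hierarchy for patient safety events.
g.add((PS.PatientSafetyEvent, RDF.type, RDFS.Class))
for sub in (PS.MedicationError, PS.PatientFall, PS.WrongSiteSurgery):
    g.add((sub, RDF.type, RDFS.Class))
    g.add((sub, RDFS.subClassOf, PS.PatientSafetyEvent))
g.add((PS.MedicationError, RDFS.label, Literal("Medication error")))

# One example report instance linked to a concept in the hierarchy.
g.add((PS.report_001, RDF.type, PS.MedicationError))

print(g.serialize(format="turtle"))
```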
Funding: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.
Abstract: Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward, NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) a pilot stage, to define the scheme (described in prior work); and 2) an adjudication stage, to normalize the graphing model (the focus of this paper). Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme. Findings: The application of NLPCONTRIBUTIONGRAPH to the 50 articles finally resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1-score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that the annotation decision variance grows with the granularity of the information. Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by only an intra-annotator consensus: a single annotator first annotated the data to propose the initial scheme, after which the same annotator re-annotated the data to normalize the annotations in an adjudication stage. However, the expected goal of this work is to achieve a standardized retrospective model of capturing NLP contributions from scholarly articles. This would entail a larger initiative of enlisting multiple annotators to accommodate different worldviews in a "single" set of structures and relationships as the final scheme. Given that the initial scheme is proposed here for the first time, and given the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited. Nevertheless, the model proposed in this work is presently limited since it does not incorporate multiple annotator worldviews; this is planned as future work to produce a robust model. Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks. Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them into a knowledge graph, which, to the best of our knowledge, does not exist in the community. Furthermore, our quantitative evaluations over the two-stage annotation tasks offer insights into task difficulty.
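To make the reported agreement figures concrete, here is a small Python sketch of how an F1-style agreement between two annotation rounds can be computed over sets of annotated items (sentences, phrases, or triples); the matching criteria and the toy data are assumptions, and the authors' exact procedure may differ.

```python
# Illustrative F1 agreement between two annotation rounds, treating each
# round as a set of annotated items; real matching rules may be looser.
def f1_agreement(round1, round2):
    r1, r2 = set(round1), set(round2)
    if not r1 or not r2:
        return 0.0
    overlap = len(r1 & r2)
    precision = overlap / len(r2)   # second round scored against the first
    recall = overlap / len(r1)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

pilot = {("doc1", "s3"), ("doc1", "s7"), ("doc2", "s1")}
adjudication = {("doc1", "s3"), ("doc2", "s1"), ("doc2", "s4")}
print(f"F1 = {f1_agreement(pilot, adjudication):.2%}")  # F1 = 66.67%
```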
Abstract: It is well known that there exists a tight connection between nonmonotonic reasoning and conditional implication. Many researchers have investigated it from various angles. Among them, C. Boutilier and P. Lamarre have shown that some conditional implications may be regarded as the homologues of different nonmonotonic consequence relations. In this paper, based on the plausibility space introduced by Friedman and Halpern, we characterize the conditional logic in which conditional implication is nonmonotonic, and this result characterizes the conditional implication that may be regarded as the corresponding object, in the metalanguage, for nonmonotonic inference relations.
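As a reference point, one standard formulation of the plausibility-based conditional in the style of Friedman and Halpern can be sketched as follows; the paper's precise definition may differ in detail.

```latex
% Sketch of the plausibility-space semantics of a conditional (Friedman-Halpern style).
(W, \mathrm{Pl}) \models \varphi \Rightarrow \psi
\quad\text{iff}\quad
\mathrm{Pl}([\![\varphi]\!]) = \bot
\;\text{ or }\;
\mathrm{Pl}([\![\varphi \wedge \psi]\!]) > \mathrm{Pl}([\![\varphi \wedge \neg\psi]\!]).
```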
Funding: Supported by the Natural Science Foundation of Beijing City under Grant No. 4123094, the Science and Technology Project of Beijing Municipal Commission of Education under Grants No. KM201110028020 and No. KM201010028019, the National Nature Science Foundation under Grants No. 61100205, No. 60873001, No. 60863011, and No. 61175068, and the Fundamental Research Funds for the Central Universities under Grant No. 2009RC0212.
Abstract: Considering that fuzzy ontology mapping has wide application yet currently cannot be handled in many fields, a Chinese fuzzy ontology model and a method for Chinese fuzzy ontology mapping are proposed. The mapping discovery between two ontologies is achieved by computing the similarity between the concepts of the two ontologies. Every concept is described by four features: concept name, property, instance, and structure. First, algorithms for calculating the four individual similarities corresponding to the four features are given. Second, similarity vectors consisting of the four weighted individual similarities are built, with the weights defined as a linear function of harmony and reliability. The similarity vector is used to represent the similarity relation between two concepts that belong to different fuzzy ontologies. Finally, a Support Vector Machine (SVM) is used to obtain the mapping concept pairs from the similarity vectors. Experimental results are satisfactory.
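A hedged sketch of the final classification step described above: four-dimensional similarity vectors (name, property, instance, and structure similarities) labeled as mapping or non-mapping pairs train an SVM, which then predicts mapping concept pairs. The similarity values, labels, and kernel choice below are invented for the example.

```python
# Illustrative final step: classify concept pairs as mappings from their
# 4-dimensional similarity vectors with an SVM (scikit-learn).
from sklearn.svm import SVC

# Each row: [name_sim, property_sim, instance_sim, structure_sim]
X_train = [
    [0.92, 0.80, 0.75, 0.70],   # known mapping pair
    [0.88, 0.60, 0.82, 0.65],   # known mapping pair
    [0.20, 0.15, 0.30, 0.10],   # known non-mapping pair
    [0.35, 0.25, 0.10, 0.20],   # known non-mapping pair
]
y_train = [1, 1, 0, 0]          # 1 = mapping, 0 = no mapping

clf = SVC(kernel="rbf").fit(X_train, y_train)

candidate_pairs = [[0.85, 0.70, 0.78, 0.60], [0.30, 0.20, 0.25, 0.15]]
print(clf.predict(candidate_pairs))  # expected: [1 0]
```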
Abstract: Purpose: Big data offer a huge challenge. Their very existence leads to the contradiction that the more data we have, the less accessible they become, as the particular piece of information one is searching for may be buried among terabytes of other data. In this contribution we discuss the origin of big data and point to three challenges that arise with big data: data storage, data processing, and generating insights. Design/methodology/approach: Computer-related challenges can be expressed by the CAP theorem, which states that it is only possible to simultaneously provide two of the following three properties in distributed applications: consistency (C), availability (A), and partition tolerance (P). As an aside we mention Amdahl's law and its application to scientific collaboration. We further discuss data mining in large databases and knowledge representation for handling the results of data mining exercises. We also offer a short informetric study of the field of big data and point to the ethical dimension of the big data phenomenon. Findings: There are still serious problems to overcome before the field of big data can deliver on its promises. Implications and limitations: This contribution offers a personal view, focusing on the information science aspects, but much more can be said about the software aspects. Originality/value: We express the hope that information scientists, including librarians, will be able to play their full role within the knowledge discovery, data mining, and big data communities, leading to exciting developments, the reduction of scientific bottlenecks, and really innovative applications.
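Since the abstract mentions Amdahl's law in passing, a short worked sketch of the law itself may help; the 90% parallel fraction used below is only an example value.

```python
# Amdahl's law: the speedup from parallelizing a fraction p of the work
# across n workers is limited by the remaining serial fraction (1 - p).
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90                        # example: 90% of the work parallelizes
for n in (2, 8, 64, 1_000_000):
    print(f"n = {n:>7}: speedup = {amdahl_speedup(p, n):6.2f}")
# Even with unlimited workers the speedup approaches 1 / (1 - p) = 10.
```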
Funding: Youth Science and Technology Foundation of Sichuan (No. L080011YF021104).
Abstract: At present, there is increasing interest in neuro-fuzzy systems, the combination of artificial neural networks with fuzzy logic. In this paper, a definition of fuzzy finite state automata (FFA) is introduced, and fuzzy knowledge equivalence representations between neural networks, fuzzy systems, and models of automata are discussed. For a trained network, we develop a method to extract a representation of the FFA encoded in the recurrent neural network that recognizes the training rules.
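A hedged Python sketch of a fuzzy finite state automaton follows, using max-min composition over an input string; the alphabet, transition degrees, and acceptance rule are illustrative assumptions, not the paper's definition.

```python
# Minimal fuzzy finite state automaton (FFA) sketch with max-min composition;
# transition degrees and the acceptance rule are illustrative assumptions.
# delta[(state, symbol)] -> {next_state: membership degree of the transition}
delta = {
    ("q0", "a"): {"q0": 0.2, "q1": 0.9},
    ("q0", "b"): {"q0": 0.8},
    ("q1", "a"): {"q1": 0.7},
    ("q1", "b"): {"q0": 0.4, "q1": 0.6},
}
final_degree = {"q0": 0.1, "q1": 1.0}    # fuzzy set of accepting states

def run(word, start="q0"):
    """Degree to which the FFA accepts `word`, via max-min composition."""
    degrees = {start: 1.0}               # current fuzzy state membership
    for symbol in word:
        nxt = {}
        for state, d in degrees.items():
            for target, td in delta.get((state, symbol), {}).items():
                nxt[target] = max(nxt.get(target, 0.0), min(d, td))
        degrees = nxt
    return max((min(d, final_degree[s]) for s, d in degrees.items()),
               default=0.0)

print(run("ab"))   # acceptance degree of the string "ab" -> 0.6
```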