Based on the relationship among geographic events, spatial changes and database operations, a new automatic (semi-automatic) incremental updating approach for spatio-temporal databases (STDB), named (event-based) incremental updating (E-BIU), is proposed in this paper. First, the relationship among events, spatial changes and database operations is analyzed; then the overall architecture of the E-BIU implementation is designed, comprising an event queue, three managers and two sets of rules, with each component presented in detail. The E-BIU process for the master STDB is then described step by step. An example of incremental updating of buildings is given to illustrate the approach. The result shows that E-BIU is an efficient automatic updating approach for a master STDB.
To solve the problems of sharing and reusing information in information systems, a rule-based approach for constructing ontologies from object-relational databases is proposed. A 3-tuple ontology construction model is proposed first. Then, four types of ontology construction rules, covering classes, properties, property characteristics, and property restrictions, are formalized according to the model. Experimental results described in Web Ontology Language show that the proposed approach is feasible for application in the Semantic Objects Project of the semantic computing laboratory at UC Irvine. The approach reduces construction time by about twenty percent compared with ontology construction from relational databases.
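The table-to-class and column-to-property rule types described above can be illustrated with a minimal sketch. The table name, columns, foreign-key map and output format below are hypothetical illustrations of this style of mapping rule, not the paper's actual formalization:

```python
def table_to_owl(table, columns, fks):
    """Rule-based mapping sketch (hypothetical rule subset):
    table -> OWL class, plain column -> datatype property,
    foreign-key column -> object property to the referenced class."""
    axioms = [f"Class: {table}"]
    for col in columns:
        if col in fks:
            # foreign key: emit an object property linking the two classes
            axioms.append(f"ObjectProperty: has_{fks[col]} Domain: {table} Range: {fks[col]}")
        else:
            # ordinary column: emit a datatype property on the class
            axioms.append(f"DataProperty: {col} Domain: {table}")
    return axioms

# hypothetical relational schema fragment
axioms = table_to_owl("Employee", ["name", "age", "dept_id"], {"dept_id": "Department"})
```

Each relational construct maps to exactly one axiom, which is what makes this family of rules mechanical to apply.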
Modular technology can effectively support the rapid design of products and is one of the key technologies for realizing mass-customization design. With the application of product lifecycle management (PLM) systems in enterprises, product lifecycle data have been effectively managed. However, these data have not been fully utilized in module division, especially for complex machinery products. To solve this problem, a product module mining method for the PLM database is proposed to improve the effect of module division. Firstly, product data are extracted from the PLM database by a data extraction algorithm. Then, data normalization and structural logical inspection are used to preprocess the extracted defective data. The preprocessed product data are analyzed and expressed in a matrix for module mining. Finally, the fuzzy c-means (FCM) clustering algorithm is used to generate product modules, which are stored in a product module library after module marking and post-processing. The feasibility and effectiveness of the proposed method are verified by a case study of a high-pressure valve.
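The FCM step at the heart of the method above can be sketched generically. This is a textbook 1-D fuzzy c-means toy on hypothetical part-attribute values, not the authors' implementation or their PLM data:

```python
import random

def fcm(points, c, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means (illustrative only)."""
    random.seed(0)
    # u[i][j]: degree to which point i belongs to cluster j, rows normalized
    u = [[random.random() for _ in range(c)] for _ in points]
    u = [[v / sum(row) for v in row] for row in u]
    centers = [0.0] * c
    for _ in range(iters):
        # update cluster centers as membership-weighted means
        for j in range(c):
            num = sum((u[i][j] ** m) * points[i] for i in range(len(points)))
            den = sum(u[i][j] ** m for i in range(len(points)))
            centers[j] = num / den
        # update memberships from distances to the new centers
        for i, x in enumerate(points):
            d = [abs(x - cj) + 1e-12 for cj in centers]  # avoid divide-by-zero
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
    return centers, u

# toy attribute values forming two obvious groups, near 1 and near 10
centers, u = fcm([0.9, 1.1, 1.0, 9.8, 10.2, 10.0], c=2)
labels = [max(range(2), key=lambda j: u[i][j]) for i in range(6)]
```

In the paper's setting, the clustered objects would be rows of the module-mining matrix rather than scalars, but the iteration is the same.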
With the deepening informationization of Resources & Environment remote-sensing geological survey, several potential problems and deficiencies have emerged: (1) lack of a uniformly planned running environment; (2) inconsistent methods of data integration; and (3) drawbacks of the differing ways in which data integration is performed. This paper addresses these problems through overall planning and design, constructing a unified running environment, consistent data-integration methods and a system structure, in order to advance informationization.
A product family (PF) is the most important part of a product platform. A new method is proposed to mine PFs based on multi-space product data in a PLM database, using the product structure tree (PST) and bill of materials (BOM) as the data source. A PF can be obtained by mining the physics space, logic space and attribute space of the product data. In this work, the PLM database is first described in terms of data organization form, data structure and data characteristics. The PF mining method then introduces sequence alignment techniques from bioinformatics, and mainly includes data pre-processing, regularization, the mining algorithm and cluster analysis. Finally, the feasibility and effectiveness of the proposed method are verified by a case study of high- and middle-pressure valves, demonstrating a feasible way to obtain PFs from a PLM database.
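Sequence alignment of the kind borrowed from bioinformatics can be illustrated with a standard Needleman-Wunsch global alignment score over two BOM component sequences. The component names and scoring weights below are hypothetical:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two sequences."""
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap          # align a[:i] against all-gaps
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap          # align b[:j] against all-gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[-1][-1]

# hypothetical BOM component sequences for two valve variants
s1 = ["body", "bonnet", "stem", "disc", "seat"]
s2 = ["body", "bonnet", "stem", "ball", "seat"]
score = nw_score(s1, s2)
```

High pairwise scores between product structures would indicate candidates for the same family; the score matrix can then feed the cluster-analysis stage.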
Evolving from the principal-subordinate C/S structure to a flexible multi-level distributed structure, i.e. the B/S architecture, so as to form a wide, distributed and orderly Internet/Intranet integrated management information system, is the worldwide trend in application software development. The advantages and disadvantages of the two modes, C/S and B/S, are compared. It is pointed out that at present a pure B/S mode cannot yet fully meet the demands of some complicated data processing, information statistics and analysis, so further development of the technology is still needed to achieve a 100% Internet solution. In this paper, a blended multi-level B/S-C/S architecture is discussed and the logical levels of the B/S mode are studied. Furthermore, combining up-to-date software development technology, the combinatorial design of different database application technologies based on B/S is discussed, with the following points: (1) the organic combination of CORBA's distributed network object technology with Java, the computing technology that spans different network operating systems; (2) the organic combination of the ASP and Plug-in modes; (3) JDBC Server and JDBC Client; (4) an expanded B/S model, i.e. the client browser application communicates directly with the Web server, and the server application communicates directly with the database server through middleware.
[Figure: (a) the architecture of the ionic transport characteristics database; (b) the flow chart of the combination of geometric analysis and the BVSE method; (c) the architecture of the high-throughput screening platform for solid electrolytes; blue bidirectional lines indicate data flow.] Transport characteristics of ionic conductors play a key role in the performance of electrochemical devices [1-2]. Any optimization of the performance of ionic compounds is inseparable from an understanding of the basic transport characteristics. It has been previously established that ion transport properties are determined by the geometry of the framework transport channels, e.g. bottleneck size, and the resulting migration energy [3-4].
A DMVOCC-MVDA (distributed multiversion optimistic concurrency control with multiversion dynamic adjustment) protocol was presented to process mobile distributed real-time transactions in mobile broadcast environments. At the mobile hosts, all transactions perform local pre-validation against the transactions committed at the server in the last broadcast cycle. Transactions that survive local pre-validation must be submitted to the server for final validation. The new protocol eliminates conflicts between mobile read-only and mobile update transactions, and resolves data conflicts flexibly by using multiversion dynamic adjustment of the serialization order to avoid unnecessary transaction restarts. Mobile read-only transactions can be committed without blocking, and their response time is greatly shortened. The tolerance of mobile transactions to disconnections from the broadcast channel is increased. In global validation, mobile distributed transactions must be checked to ensure distributed serializability across all participants. Simulation results show that the proposed concurrency control protocol outperforms other protocols in terms of miss rate, restart rate and commit rate. Under high workload (think time of 1 s), the miss rate of DMVOCC-MVDA is only 14.6%, significantly lower than that of other protocols; its restart rate is only 32.3%, showing that DMVOCC-MVDA can effectively reduce the restart rate of mobile transactions; and its commit rate reaches 61.2%, clearly higher than that of other protocols.
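The local pre-validation step is in essence backward optimistic validation: the transaction's read set is checked against the write sets of transactions committed during the last broadcast cycle. A minimal sketch of that generic check (not the full DMVOCC-MVDA protocol, which also adjusts serialization order dynamically):

```python
def pre_validate(read_set, committed_write_sets):
    """Backward validation: fail if any item this transaction read was
    overwritten by a transaction that committed during its read phase."""
    return all(not (read_set & ws) for ws in committed_write_sets)

# read set of a mobile transaction vs. write sets broadcast by the server
survives = pre_validate({"x", "y"}, [{"z"}, {"w"}])    # no overlap: survives
restarts = pre_validate({"x", "y"}, [{"y", "q"}])      # "y" was overwritten
```

Only transactions for which this local check passes are submitted to the server, which saves uplink bandwidth and server load for doomed transactions.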
The present article outlines progress made in designing an intelligent information system for automatic management and knowledge discovery in large numeric and scientific databases, with a validating application to the CAST-NEONS environmental databases used for ocean modeling and prediction. We describe a discovery-learning process (Automatic Data Analysis System) which combines the features of two machine learning techniques to generate sets of production rules that efficiently describe the observational raw data contained in the database. Data clustering allows the system to classify the raw data into meaningful conceptual clusters, which the system learns by induction to build decision trees, from which the production rules are automatically deduced.
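The final step, deducing production rules from an induced decision tree, is mechanical: each root-to-leaf path becomes one IF-THEN rule. A generic sketch, where the tree shape, attribute names and cluster labels are hypothetical and not drawn from CAST-NEONS:

```python
def tree_to_rules(tree, conditions=()):
    """Flatten a decision tree into production rules.
    A tree is either a leaf label, or (attribute, {value: subtree})."""
    if not isinstance(tree, tuple):
        return [(conditions, tree)]          # leaf: one rule per path
    attr, branches = tree
    rules = []
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + ((attr, value),))
    return rules

# hypothetical tree over two observational attributes
tree = ("salinity", {
    "high": ("temp", {"warm": "cluster_A", "cold": "cluster_B"}),
    "low": "cluster_C",
})
rules = tree_to_rules(tree)
```

Each returned pair reads as "IF all (attribute, value) conditions hold THEN assign the conceptual cluster", which is the production-rule form the abstract describes.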
In parallel real-time database systems, concurrency control protocols must satisfy timing constraints as well as integrity constraints. The authors present a validation concurrency control (VCC) protocol, which enhances the performance of real-time concurrency control by reducing the number of transactions that might miss their deadlines, and compare its performance with that of the HP2PL (high-priority two-phase locking) and OCC-TI-WAIT-50 (optimistic concurrency control with time intervals and WAIT-50) protocols under a shared-disk architecture by simulation. The simulation results reveal that the presented protocol can effectively reduce the number of restarted transactions that might miss their deadlines, and performs better than HP2PL and OCC-TI-WAIT-50. It works well when the transaction arrival rate is below a threshold; however, due to resource contention, the percentage of missed deadlines increases sharply when the arrival rate exceeds the threshold.
The searching method for spatial information in a traditional geo-archives catalog database (TGCD) is text-based, and retrieval results can come only from the text fields of the relational database. The information to be queried must have been entered into the relational database in text form in advance; otherwise, visitors get no result from it.
OBJECTIVE To construct an integrative database for multi-compound drug discovery. METHODS We designed and constructed a database system which integrates traditional herbal medicine, functional food, and drug combination information. The database consists of six entity tables: drug combinations, functional foods, prescriptions, herbs, compounds and phenotypes. We established strategies for data integration and entity resolution to accommodate the heterogeneous information of multi-compound therapies. To standardize the data, instances of the entity tables are mapped to international identifiers, and phenotype terms in narrative text are extracted using the named entity recognition (NER) method. RESULTS The database integrates therapeutic information on traditional herbal medicine, functional foods and combination drugs acquired from the Traditional Chinese Medicine Information Database (TCM-ID), the Food and Drug Administration (FDA) and the Drug Combination Database (DCDB). Herb information is mapped to NCBI taxonomy identifiers, and compound information is mapped to PubChem and ChEMBL identifiers for standardization. We also applied MetaMap, a tool for recognizing UMLS concepts in narrative text, to extract phenotype terms. The current version of the database contains 6 291 drug combinations, 1 615 functional foods, 20 091 prescriptions, 8 889 herbs, 227 636 compounds and 11 744 phenotypes. CONCLUSION The database provides varied therapeutic information on multi-compound therapies, serving as a fundamental resource for polypharmacology research.
Applying high-speed machining technology on the shop floor has many benefits, such as manufacturing more accurate parts with better surface finishes. The selection of appropriate machining parameters plays a very important role in implementing high-speed machining technology. Case-based reasoning is used in developing the high-speed machining database to overcome the shortage of available high-speed cutting parameters in machining data handbooks and on shop floors. The high-speed machining database developed in this paper includes two main components: the machining database and the case base. The machining database stores cutting parameters, cutting tool data, workpieces and their material data, and other related data, while the case base mainly stores successfully solved cases, i.e. workpiece machining problems and their solutions. Case description and case retrieval methods are described for establishing the case-based reasoning high-speed machining database. With the case retrieval method, solved cases similar to a new machining problem can be retrieved from the case base. The solution of the best-matched case is evaluated and modified, and then regarded as the proposed solution to the new machining problem. After verification, the problem and its solution are packed into a new case and stored in the case base for future applications.
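The retrieval step of case-based reasoning is typically a weighted nearest-neighbor search over case attributes. A minimal sketch, where the attribute names, weights and cutting values are hypothetical and not taken from the paper's database:

```python
def similarity(case, query, weights):
    """Weighted attribute similarity between a stored case and a new problem."""
    s = sum(w * (1.0 if case[attr] == query[attr] else 0.0)
            for attr, w in weights.items())
    return s / sum(weights.values())

def retrieve(case_base, query, weights):
    """Return the stored case most similar to the new machining problem."""
    return max(case_base, key=lambda c: similarity(c, query, weights))

# hypothetical solved cases with their recommended cutting speeds
cases = [
    {"material": "Al7075", "operation": "milling", "speed_m_min": 1200},
    {"material": "Ti6Al4V", "operation": "milling", "speed_m_min": 150},
]
query = {"material": "Ti6Al4V", "operation": "milling"}
weights = {"material": 0.7, "operation": 0.3}
best = retrieve(cases, query, weights)
```

A production system would use graded similarity per attribute (e.g. numeric distance for hardness) rather than exact matching, but the retrieve-then-adapt flow is the same.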
A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, is used here for the construction of a 65 Ma high-precision database. The data in the database are divided into subsets according to type.
This paper briefly introduces a multi-granularity locking (MGL) model for concurrency control in object-oriented database systems, and presents the MGL model formally. Four lock-scheduling algorithms for MGL are proposed. The ideas of single-queue scheduling (SQS) and dual-queue scheduling (DQS), together with their algorithms and performance evaluations, have been presented in previous papers. This paper describes a new scheduling idea for MGL, compatible requests first (CRF). Combining the new idea with SQS and DQS, we propose two new scheduling algorithms called CRFS and CRFD. After describing the simulation model, the paper compares the performance of these four algorithms. As shown in the experiments, DQS performs better than SQS, CRFD better than DQS, and CRFS better than SQS; CRFS is the best of the four scheduling algorithms.
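The compatibility test that a CRF-style scheduler relies on can be sketched with the standard multi-granularity lock modes (IS, IX, S, SIX, X) and their textbook compatibility matrix. This is a generic illustration of the "compatible requests first" idea, not the paper's CRFS/CRFD algorithms, and it deliberately ignores that granted requests would themselves join the held set:

```python
# Standard multi-granularity lock compatibility matrix
COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def compatible(requested, held):
    """True if the requested mode can coexist with every mode already held."""
    return all(COMPAT[requested][h] for h in held)

def crf_schedule(held, queue):
    """Compatible-requests-first: grant every queued request compatible
    with the currently held modes; incompatible requests wait."""
    granted = [m for m in queue if compatible(m, held)]
    waiting = [m for m in queue if not compatible(m, held)]
    return granted, waiting

granted, waiting = crf_schedule(held=["IS", "IX"], queue=["IS", "S", "IX", "X"])
```

Compared with strict FIFO queueing, letting compatible requests jump ahead increases concurrency at the cost of possible starvation of incompatible requests, which is the trade-off the paper's simulations examine.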
To realize content-based retrieval of large image databases, an efficient index and retrieval scheme is required. This paper proposes a clustering-based index algorithm called CMA, which supports fast retrieval of large image databases. CMA takes advantage of k-means and self-adaptive algorithms; it is simple and works without any user interaction. There are two main stages in the algorithm. In the first stage, it classifies the images in a database into several clusters and automatically obtains the necessary parameters for the next stage, the k-means iteration. The CMA algorithm is tested on a large database of more than ten thousand images and compared with the k-means algorithm. Experimental results show that the algorithm is effective in both precision and retrieval time.
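The k-means iteration stage that CMA builds on can be sketched in a few lines. This is plain 1-D k-means with deterministic initialization on hypothetical feature values, not the CMA algorithm itself (which derives its initial clusters and parameters self-adaptively):

```python
def kmeans(points, k, iters=20):
    """Plain 1-D k-means: assign each point to its nearest center,
    then recompute centers as cluster means."""
    centers = points[:k]                     # deterministic init from first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in points:
            j = min(range(k), key=lambda c: abs(x - centers[c]))
            clusters[j].append(x)
        # empty clusters keep their previous center
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# hypothetical image-feature values forming two groups, near 1 and near 8
centers, clusters = kmeans([1.0, 1.2, 0.8, 8.0, 8.2, 7.8], k=2)
```

For retrieval, a query feature would first be matched against the cluster centers, so that only the images in the nearest cluster need to be compared, which is where the index saves time.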
The engine engineering database system is a CAD-oriented database management system capable of managing distributed data. This paper discusses the security of the engine engineering database management system (EDBMS). By studying and analyzing database security, a series of security rules reaching the B1-level security standard are derived, covering discretionary access control (DAC), mandatory access control (MAC) and audit. The EDBMS implements DAC, MAC and multi-granularity audit. DAC solves the problems of role inheritance, right containment, authorization identification and cascading revocation; MAC includes subject and object security setup rules, security modification rules and multilevel relational access operation rules; audit allows the subject, object or operation type to be taken as the audit object, implementing a flexible, multi-granularity audit method. The model is designed to act as a security agent for the daemon database. At present, the model is implemented and runs on Windows 2000.
Recently, thousands of SSR and now SNP markers have been discovered in cotton. Each of these markers provides a valuable molecular tool for applying genetic and genomic research to cotton improvement. The Cotton DNA Marker Database (CMD) continues to serve as a molecular marker resource.
Funding (rule-based ontology construction): supported by the National Natural Science Foundation of China (60471055) and the National "863" High Technology Research and Development Program of China (2007AA01Z443).
Funding (product module mining): Project 51275362 supported by the National Natural Science Foundation of China; Project 2013M542055 supported by the China Postdoctoral Science Foundation.
Funding (product family mining): Project 51275362 supported by the National Natural Science Foundation of China; Project 2014ZX04015021 supported by the National Science and Technology Major Project, China.
Funding (DMVOCC-MVDA protocol): Project 20030533011 supported by the National Research Foundation for the Doctoral Program of Higher Education of China.
Funding (multi-compound therapy database): supported by the Bio-Synergy Research Project (NRF-2012M3A9C4048758) of the Ministry of Science, ICT and Future Planning through the National Research Foundation.
文摘OBJECTIVE To construct an integrative database for multi-compound drug discovery.METHODS We designed and constructed a database system,which integrates traditional herbal medicine,functional food,and drug combination information.Our database consists of six entity tables,namely drug combinations,functional foods,prescriptions,herbs,compounds and phenotypes.We established strategies for data integration and entity resolution to facilitate heterogeneous information of multi-compound therapies.To standardize the data,instances of entity tables are mapped to international identifiers,and phenotype terms in narrative text are extracted by using the named entity recognition(NER)method.RESULTS The database integrates therapeutic information of traditional herbal medicine,functional foods and combination drugs which is acquired from Traditional Chinese Medicine Information Database(TCM-ID),Food and Drug Administration(FDA)and Drug Combination Database(DCDB).The herb information is mapped to NCBI taxonomy identifiers,and compound information is mapped to PubChem and ChEMBL identifiers for standardization.We also applied MetaMap,a tool for recognizing UMLS concepts from narrative text,to extract phenotype terms.The current version of the database contains 6 291 drug combinations,1 615 functional foods,20 091 prescriptions,8889herbs,227 636 compounds and 11 744 phenotypes.CONCLUSION Our database provides various therapeutic information of multi-compound therapies which serve as a fundamental resource for the polypharmacology research.
Abstract: Applying high-speed machining technology on the shop floor has many benefits, such as manufacturing more accurate parts with better surface finishes. The selection of appropriate machining parameters plays a very important role in implementing high-speed machining technology. Case-based reasoning is used in developing a high-speed machining database to overcome the shortage of available high-speed cutting parameters in machining data handbooks and on shop floors. The high-speed machining database developed in this paper includes two main components: the machining database and the case base. The machining database stores cutting parameters, cutting tool data, workpieces and their material data, and other related data, while the case base mainly stores successfully solved cases, i.e., workpiece machining problems and their solutions. Case description and case retrieval methods are described to establish the case-based reasoning high-speed machining database. With the case retrieval method, previously solved cases similar to a new machining problem can be retrieved from the case base. The solution of the best-matched case is evaluated and modified, and then regarded as the proposed solution to the new machining problem. After verification, the problem and its solution are packaged into a new case and stored in the case base for future applications.
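The retrieve-and-reuse step described above can be sketched as a weighted attribute match over stored cases; the field names, weights, and parameter values below are hypothetical, not taken from the paper's case description.

```python
# Hedged sketch of CBR case retrieval for a machining database.
# All field names, weights, and values are illustrative assumptions.

def similarity(case_a, case_b, weights):
    """Weighted score: fraction of matching discrete attributes."""
    score = sum(w for field, w in weights.items()
                if case_a.get(field) == case_b.get(field))
    return score / sum(weights.values())

def retrieve(case_base, query, weights, top_k=1):
    """Return the top_k stored cases most similar to the query."""
    ranked = sorted(case_base,
                    key=lambda c: similarity(c, query, weights),
                    reverse=True)
    return ranked[:top_k]

case_base = [
    {"material": "Ti-6Al-4V", "operation": "milling",
     "tool": "carbide", "speed_m_min": 120},
    {"material": "Al-7075", "operation": "milling",
     "tool": "carbide", "speed_m_min": 900},
]
weights = {"material": 0.5, "operation": 0.3, "tool": 0.2}
query = {"material": "Al-7075", "operation": "milling", "tool": "carbide"}
best = retrieve(case_base, query, weights)[0]
```

In the full workflow, `best` would then be evaluated, its parameters modified for the new workpiece, and the verified result stored back into `case_base`.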
Abstract: A compilation of all meaningful historical data on natural disasters that took place in Alxa, Inner Mongolia, is used here to construct a 65 Ma high-precision database. The data in the database are divided into subsets according to the types
Abstract: This paper briefly introduces a multi-granularity locking (MGL) model for concurrency control in object-oriented database systems, and presents the MGL model formally. Four lock-scheduling algorithms for MGL are proposed. The ideas of single-queue scheduling (SQS) and dual-queue scheduling (DQS), along with their algorithms and performance evaluations, have been presented in earlier work. This paper describes a new scheduling idea for MGL: compatible requests first (CRF). Combining this idea with SQS and DQS, we propose two new scheduling algorithms called CRFS and CRFD. After describing the simulation model, the paper compares the performance of these four algorithms. As the experiments show, DQS performs better than SQS, CRFD better than DQS, and CRFS better than SQS; CRFS is the best of the four scheduling algorithms.
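The "compatible requests first" idea can be sketched against the standard multi-granularity lock-mode set (IS/IX/S/SIX/X): one pass over the waiting queue grants every request compatible with the currently held modes, leaving the rest queued. The queue handling below is an illustrative assumption; the compatibility matrix is the conventional one.

```python
# Sketch of a CRF pass over a lock-wait queue, using the standard
# multi-granularity compatibility matrix. Queue details are assumptions.

COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def compatible(mode, held_modes):
    """A request is grantable if compatible with every held mode."""
    return all(COMPAT[mode][h] for h in held_modes)

def crf_grant(held, queue):
    """Compatible-requests-first: grant every queued request compatible
    with the held (and just-granted) modes; keep the rest waiting in order."""
    granted, waiting = [], []
    for req in queue:
        if compatible(req, held + granted):
            granted.append(req)
        else:
            waiting.append(req)
    return granted, waiting

granted, waiting = crf_grant(held=["S"], queue=["IS", "X", "S"])
```

Note how the incompatible `X` request is passed over while the later, compatible `S` request is granted out of arrival order, which is exactly what distinguishes CRF from plain FIFO queue scheduling.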
Funding: This project was supported by the National High-Tech Research and Development Program of China (863 Program) (2001AA115123).
Abstract: To realize content-based retrieval of large image databases, an efficient index and retrieval scheme is required. This paper proposes a clustering-based index algorithm called CMA, which supports fast retrieval of large image databases. CMA takes advantage of k-means and self-adaptive algorithms; it is simple and works without any user interaction. The algorithm has two main stages. In the first stage, it classifies the images in a database into several clusters and automatically obtains the necessary parameters for the next stage, the k-means iteration. The CMA algorithm is tested on a large database of more than ten thousand images and compared with the k-means algorithm. Experimental results show that the algorithm is effective in both precision and retrieval time.
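The two-stage structure can be sketched as follows: a self-adaptive first pass estimates the number of clusters and initial centers, which then seed a standard k-means iteration. This is a generic sketch on 1-D toy features under an assumed distance threshold, not the CMA algorithm's actual feature space or parameter rule.

```python
# Illustrative two-stage clustering in the spirit of CMA:
# stage 1 self-adaptively picks k and initial centers,
# stage 2 refines them with k-means. Threshold is an assumption.

def first_pass(points, threshold):
    """Greedy pass: open a new cluster center whenever a point lies
    farther than `threshold` from every existing center."""
    centers = []
    for p in points:
        if all(abs(p - c) > threshold for c in centers):
            centers.append(p)
    return centers

def kmeans(points, centers, iters=10):
    """Plain k-means iteration seeded with the first-pass centers."""
    for _ in range(iters):
        groups = {i: [] for i in range(len(centers))}
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in groups.items()]
    return centers

features = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9]   # toy 1-D image features
centers = kmeans(features, first_pass(features, threshold=1.0))
```

The point of the first pass is that the user never supplies k: it falls out of the threshold test, which is what "works without any user interaction" requires.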
Abstract: The engine engineering database system is a CAD-oriented database management system with the capability of managing distributed data. This paper discusses the security of the engine engineering database management system (EDBMS). Through a study and analysis of database security, a series of security rules reaching the B1-level security standard is derived, covering discretionary access control (DAC), mandatory access control (MAC), and audit. The EDBMS implements DAC, MAC, and multi-granularity audit functions. DAC solves the problems of role inheritance, right containment, authorization identification, and cascade revocation; MAC includes rules for subject and object security setup, security modification, and multilevel relational access operations; audit allows the subject, object, or operation type to be designated as the audit object, enabling a flexible, multi-granularity audit method. The model is designed to act as a security agent for accessing the daemon database. At present, the model is implemented and runs on Windows 2000.
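Of the DAC problems listed, cascade revocation is the least obvious, so a small sketch may help: when a right is revoked, grants that depended on it must fall too. The grant-graph representation and names below are illustrative assumptions, not the EDBMS's actual design.

```python
# Hypothetical sketch of DAC cascade revocation: revoking a grant
# transitively removes grants whose grantor no longer holds the right.
# "admin" is assumed to be the original right holder.

grants = {("admin", "alice"), ("alice", "bob"),
          ("bob", "carol"), ("admin", "dave")}

def cascade_revoke(grants, grantor, grantee):
    """Remove (grantor, grantee), then repeatedly drop any grant whose
    grantor no longer holds the right from anyone."""
    g = set(grants) - {(grantor, grantee)}
    changed = True
    while changed:
        changed = False
        holders = {"admin"} | {to for _, to in g}   # who still holds the right
        for frm, to in list(g):
            if frm not in holders:
                g.discard((frm, to))
                changed = True
    return g

remaining = cascade_revoke(grants, "admin", "alice")
```

Revoking alice's grant also removes bob's and carol's, since their chain of authorization ran through alice, while dave's independent grant survives.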
Abstract: Recently, thousands of SSR and now SNP markers have been discovered in cotton. Each of these markers provides a valuable molecular tool for applying genetic and genomic research to cotton improvement. The Cotton DNA Marker Database (CMD) continues to serve as a molecular marker resource for