Funding: Projects (41362015, 51164012) supported by the National Natural Science Foundation of China; Project (2012AA061901) supported by the National High-tech Research and Development Program of China.
Abstract: Landslide hazard mapping is a fundamental tool for disaster management in loess terrains. A major issue with landslide hazard assessment methods based on the Naive Bayesian classification technique is the difficulty of quantifying uncertain triggering factors. The main purpose of this work is therefore to evaluate the predictive power of landslide spatial models based on an uncertain Naive Bayesian classification method in Baota district of Yan'an city, Shaanxi province, China. First, thematic maps representing the various factors related to landslide activity were generated. Second, a landslide hazard map was produced using field data and GIS techniques. To improve the accuracy of the resulting hazard map, the uncertain triggering factors were quantified to build landslide spatial models based on an uncertain Naive Bayesian classification method, named the NBU algorithm. The areas under the relative operating characteristic curves (AUC) for the NBU and Naive Bayesian algorithms are 87.29% and 82.47%, respectively. Thus, the NBU algorithm can be used efficiently for landslide hazard analysis and may be widely applicable to the prediction of other spatial events based on uncertain classification techniques.
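The abstract does not give the NBU algorithm's formulas, so only the standard Naive Bayesian step it builds on can be sketched here. In the sketch below, the triggering-factor names ("slope", "lith") and the training cells are hypothetical illustrations, not the study's data, and the uncertainty-quantification extension is not reproduced:

```python
# Minimal Naive Bayesian hazard scoring over categorical triggering
# factors. Labels: 1 = landslide occurred in the cell, 0 = stable.
def train_naive_bayes(samples, labels):
    """Count class priors and per-factor value frequencies from
    (factor-dict, label) training pairs."""
    priors, counts = {}, {}
    for s, y in zip(samples, labels):
        priors[y] = priors.get(y, 0) + 1
        for f, v in s.items():
            vc = counts.setdefault((y, f), {})
            vc[v] = vc.get(v, 0) + 1
    return priors, counts, len(labels)

def posterior(sample, priors, counts, total):
    """Return normalized P(label | sample) under the Naive Bayes
    independence assumption, with Laplace smoothing."""
    scores = {}
    for y, n_y in priors.items():
        p = n_y / total  # class prior
        for f, v in sample.items():
            vc = counts.get((y, f), {})
            # smoothed likelihood of this factor value given the class
            p *= (vc.get(v, 0) + 1) / (n_y + len(vc) + 1)
        scores[y] = p
    z = sum(scores.values())
    return {y: p / z for y, p in scores.items()}

# Hypothetical mapped cells with two categorical factors each.
samples = [
    {"slope": "steep", "lith": "loess"},
    {"slope": "steep", "lith": "loess"},
    {"slope": "gentle", "lith": "rock"},
    {"slope": "gentle", "lith": "loess"},
]
labels = [1, 1, 0, 0]
model = train_naive_bayes(samples, labels)
probs = posterior({"slope": "steep", "lith": "loess"}, *model)
print(probs)  # a steep loess cell scores high landslide probability
```

Applied per map cell, the posterior for the landslide class gives the hazard score that is then binned into the hazard map classes.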
Funding: Project (2012BAH18B05) supported by the Supporting Program of the Ministry of Science and Technology of China.
Abstract: Content extraction from HTML pages is the basis of web page clustering and information retrieval, so it is necessary to eliminate cluttered information and important to extract page content accurately. A novel and accurate solution for extracting the content of HTML pages is proposed. First, the HTML page is parsed into a DOM object and an ID is generated for every leaf node. Second, a score is calculated for each leaf node and adjusted according to its relationship with neighboring nodes. Finally, information blocks are identified according to their definition, and a universal classification algorithm is used to recognize the content blocks among them. Experimental results show that the algorithm extracts content effectively and accurately, with a recall rate of 96.5% and a precision of 93.8%.
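The parse → score → neighbor-adjust → select pipeline described above can be sketched with Python's standard `html.parser`. The scoring rule (text length, penalized for link text) and the selection threshold below are illustrative assumptions standing in for the paper's unspecified formulas and classifier:

```python
# Sketch: collect text-bearing leaves from an HTML page, score each
# leaf, smooth scores over neighbors, and keep high-scoring leaves
# as content blocks.
from html.parser import HTMLParser

class LeafScorer(HTMLParser):
    """Record each stripped text chunk and whether it sits inside a
    link (link text usually indicates navigation clutter)."""
    def __init__(self):
        super().__init__()
        self.in_link = 0
        self.leaves = []  # list of (text, is_link)

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link += 1

    def handle_endtag(self, tag):
        if tag == "a" and self.in_link:
            self.in_link -= 1

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.leaves.append((text, self.in_link > 0))

def score_leaves(leaves):
    # Raw score: text length, negated for link text.
    raw = [-len(t) if is_link else len(t) for t, is_link in leaves]
    # Neighbor adjustment: average each score with adjacent leaves.
    return [sum(raw[max(0, i - 1):i + 2]) / len(raw[max(0, i - 1):i + 2])
            for i in range(len(raw))]

def extract_content(html, threshold=10):
    p = LeafScorer()
    p.feed(html)
    scores = score_leaves(p.leaves)
    # Keep non-link leaves whose adjusted score clears the threshold.
    return [t for (t, is_link), s in zip(p.leaves, scores)
            if not is_link and s >= threshold]

page = ("<html><body><div><a href='/'>Home</a> <a href='/x'>News</a></div>"
        "<p>This is the main article text of the page, long enough "
        "to score well.</p><p>A second content paragraph follows here "
        "with more body text.</p>"
        "<div><a href='/c'>Contact</a></div></body></html>")
print(extract_content(page))
```

Running this keeps the two article paragraphs and drops the navigation links, mirroring the intent of the paper's pipeline; the real method additionally groups scored leaves into information blocks and runs a trained classifier over them.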