Malignant tumours are a persistent threat to human health. For tumour diagnosis, positron emission tomography (PET) is among the most sensitive and advanced imaging techniques, relying on radiotracers such as radioactive ^(18)F, ^(11)C, ^(64)Cu, ^(68)Ga, and ^(89)Zr. Among these radiotracers, ^(18)F-labelled chemical agents used as PET probes play a predominant role in monitoring, detecting, treating, and predicting tumours owing to the favourable half-life of ^(18)F. In this paper, the ^(18)F-labelled chemical materials used as PET probes are systematically summarized. First, we introduce the various radionuclides used in PET and elaborate on the mechanism of PET imaging. The review highlights the ^(18)F-labelled chemical agents used as PET probes, including 2-deoxy-2-[^(18)F]fluoro-D-glucose ([^(18)F]FDG), ^(18)F-labelled amino acids, ^(18)F-labelled nucleic acids, ^(18)F-labelled receptors, ^(18)F-labelled reporter genes, and ^(18)F-labelled hypoxia agents. In addition, some PET probes with metals as supplementary elements are introduced briefly. ^(18)F-labelled nanoparticles for PET and multi-modality imaging probes are summarized in detail. The approaches and strategies for fabricating ^(18)F-labelled PET probes are also described briefly, and future directions for PET probe development are discussed. The development and application of ^(18)F-labelled PET probes will expand our knowledge and shed light on the diagnosis and theranostics of tumours.
To extract and display the significant information of combat systems, this paper introduces the methodology of functional cartography into combat networks and proposes an integrated framework named "functional cartography of heterogeneous combat networks based on the operational chain" (FCBOC). In this framework, a functional module detection algorithm named the operational chain-based label propagation algorithm (OCLPA), which considers the cooperation and interactions among combat entities and can thus naturally tackle network heterogeneity, is proposed to identify the functional modules of the network. Then, the nodes and their modules are classified into different roles according to their properties. A case study shows that FCBOC can provide a simplified description of the disorderly information of combat networks and enables us to identify their functional and structural network characteristics. The results provide useful information to help commanders make precise and accurate decisions regarding the protection, disintegration, or optimization of combat networks. Three algorithms are also compared with OCLPA to show that FCBOC can most effectively find functional modules with practical meaning.
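The OCLPA algorithm builds on label propagation; its operational-chain weighting is specific to the paper, but the underlying mechanism can be sketched generically. Below is a minimal, illustrative label propagation routine on a toy undirected graph (the adjacency list, deterministic tie-breaking rule, and clique structure are assumptions for illustration, not the paper's method):

```python
from collections import Counter

def label_propagation(adj, max_iters=100):
    # Every node starts in its own module, then repeatedly adopts the
    # most frequent label among its neighbours until no label changes.
    labels = {node: node for node in adj}
    for _ in range(max_iters):
        changed = False
        for node, neighbours in adj.items():
            if not neighbours:
                continue
            counts = Counter(labels[n] for n in neighbours)
            best_count = max(counts.values())
            if counts.get(labels[node], 0) < best_count:
                # deterministic tie-break: pick the largest top label
                labels[node] = max(l for l, c in counts.items() if c == best_count)
                changed = True
        if not changed:
            break
    return labels

# Toy graph: two 4-cliques joined by a single bridge edge (3-4).
adj = {
    0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2, 4],
    4: [3, 5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6],
}
modules = label_propagation(adj)
```

On this toy graph the routine converges to one module per clique; OCLPA replaces the plain neighbour count with operational-chain-aware propagation so that the heterogeneous entity types of a combat network are handled naturally.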
A generalized labeled multi-Bernoulli (GLMB) filter with motion-mode labels, based on the track-before-detect (TBD) strategy, is presented for maneuvering targets in heavy-tailed sea clutter; transitions between target motion modes are modeled by a jump Markov system (JMS). A closed-form solution is derived for the sequential Monte Carlo implementation of the GLMB filter based on the TBD model. In the update step, we derive a tractable GLMB density that preserves the cardinality distribution and first-order moment of the labeled multi-target distribution of interest while minimizing the Kullback-Leibler divergence (KLD), enabling the next recursive cycle. Simulation results show that the proposed multiple-model GLMB-TBD (MM-GLMB-TBD) algorithm, based on a K-distributed clutter model, improves detection and tracking performance in both estimation error and robustness compared with state-of-the-art algorithms for sea clutter backgrounds. Additionally, the simulations show that the proposed MM-GLMB-TBD algorithm accurately outputs multi-target trajectories with considerably less computational complexity than the adapted dynamic-programming-based TBD (DP-TBD) algorithm. The simulation results also indicate that the proposed MM-GLMB-TBD filter slightly outperforms the JMS particle-filter-based TBD (JMS-MeMBer-TBD) filter in estimation error at essentially the same computational cost. Finally, the impact of mismatches in the clutter model and clutter parameters on the performance of the MM-GLMB-TBD filter is investigated.
High-resolution cameras and multi-camera systems are being used in video surveillance applications such as securing public places, traffic monitoring, and military and satellite imaging. This creates a demand for computational algorithms capable of processing high-resolution videos in real time. Motion detection and background separation play a vital role in capturing the object of interest in surveillance videos, but as camera resolution grows, the time complexity of these algorithms increases and they fail to meet real-time requirements. Parallel architectures provide a superior platform for working efficiently with complex algorithmic solutions. In this work, a method is proposed for reliably identifying moving objects in videos using adaptive background modeling, motion detection, and object estimation. The pre-processing stage includes an adaptive block-based background model and a dynamically adaptive thresholding technique to estimate the moving objects. The post-processing stage includes an efficient parallel connected-component labelling algorithm to accurately delineate the objects of interest. New parallel processing strategies are developed at each stage of the algorithm to reduce the time complexity of the system. On a GPU, the algorithm achieves an average speedup of 12.26 times for lower-resolution video frames (320×240, 720×480, 1024×768) and 7.30 times for higher-resolution video frames (1360×768, 1920×1080, 2560×1440), which is superior to CPU processing. The algorithm was also tested with varying numbers of threads per thread block, and the minimum execution time was achieved for 16×16 thread blocks. Finally, the algorithm was tested on a night sequence with very little light in the scene and still delivered a significant speedup and accurate object detection.
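The paper's adaptive block background model and GPU kernels are not reproduced here, but the core per-frame idea, differencing against a background and thresholding with a statistic computed from the current frame, can be sketched serially. The threshold formula mean + k·std and the toy 1-D "frame" are illustrative assumptions:

```python
import statistics

def motion_mask(background, frame, k=1.0):
    # Absolute frame difference, then a dynamically adaptive threshold
    # recomputed per frame: T = mean(|diff|) + k * std(|diff|).
    diffs = [abs(f - b) for f, b in zip(frame, background)]
    t = statistics.mean(diffs) + k * statistics.pstdev(diffs)
    return [1 if d > t else 0 for d in diffs]

background = [10, 10, 10, 10, 10, 10, 10, 10]
frame      = [10, 11,  9, 10, 80, 82, 10, 10]  # two "moving" pixels
mask = motion_mask(background, frame)
```

Each pixel's comparison is independent, which is what makes this stage embarrassingly parallel on a GPU: one thread per pixel, with the 16×16 thread blocks reported above mapping naturally onto image tiles.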
Group decision making problems are investigated with uncertain multiplicative linguistic preference relations. An unbalanced multiplicative linguistic label set is introduced, which the experts can use to express their linguistic preference information over alternatives. The uncertain linguistic weighted geometric mean operator is utilized to aggregate all the individual uncertain multiplicative linguistic preference relations into a collective one, and a simple approach is then developed to determine the experts' weights by utilizing the consensus degrees between the individual and the collective uncertain multiplicative linguistic preference relations. Furthermore, a practical interactive procedure for group decision making is proposed based on uncertain multiplicative linguistic preference relations, in which a possibility degree formula and a complementary matrix are used to rank the given alternatives. Finally, the proposed procedure is applied to the group decision making problem of a manufacturing company searching for the best global supplier of one of the most critical parts used in its assembly process.
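On a multiplicative scale the linguistic labels can be mapped to numeric grades, and an uncertain assessment becomes an interval. A minimal numeric sketch of two ingredients named above, the uncertain weighted geometric mean and the possibility degree formula, follows (the grade values, weights, and interval encoding are illustrative assumptions, not the paper's exact operators):

```python
from math import prod

def ulwg(intervals, weights):
    # Uncertain (interval) weighted geometric mean: aggregate the lower
    # and upper bounds separately as prod(x_i ** w_i); weights sum to 1.
    lo = prod(l ** w for (l, _), w in zip(intervals, weights))
    hi = prod(u ** w for (_, u), w in zip(intervals, weights))
    return lo, hi

def possibility(a, b):
    # Possibility degree p(a >= b) for non-degenerate intervals;
    # p(a >= b) + p(b >= a) == 1, which yields a complementary matrix.
    (la, ua), (lb, ub) = a, b
    return max(0.0, min(1.0, (ua - lb) / ((ua - la) + (ub - lb))))

# Three experts' interval grades for one alternative, with expert weights.
ratings = [(2, 4), (1, 3), (2, 2)]
weights = [0.5, 0.3, 0.2]
lo, hi = ulwg(ratings, weights)
p = possibility((2, 4), (1, 3))
```

The complementary matrix of pairwise possibility degrees between the aggregated intervals is then used to rank the alternatives, as in the procedure above.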
Background Coronary artery calcification is a well-known marker of atherosclerotic plaque burden. High-resolution intravascular optical coherence tomography (OCT) imaging has shown the potential to characterize the details of coronary calcification in vivo. In routine clinical practice, reviewing the more than 250 images in a single pullback is a time-consuming and laborious task for clinicians. Moreover, the imbalanced label distribution within pullbacks can lead to failure of a classifier model. Given the success of deep learning with other imaging modalities, a thorough understanding of calcified plaque detection using convolutional neural networks (CNNs) within pullbacks is needed to support future clinical decisions. Methods All 33 IVOCT clinical pullbacks from 33 patients were acquired at Affiliated Drum Tower Hospital, Nanjing University, between December 2017 and December 2018. For ground-truth annotation, three trained experts determined the type of plaque present in each B-scan, assigning the label 'no calcified plaque' or 'calcified plaque' to each OCT image. All experts were provided with all images for labeling. The final label was determined by consensus between the experts; disagreements on plaque type were resolved by asking the experts to repeat their evaluation. Before running the algorithms, all OCT images were resized to a resolution of 300×300, matching the range used with standard architectures in the natural image domain. We randomly selected 26 pullbacks for training and used the remaining data for testing. The imbalanced label distribution within entire pullbacks posed a great challenge for the various CNN architectures, so we designed the following experiment. First, we fine-tuned twenty different CNN architectures, including custom and pretrained ones; considering the nature of OCT images, the custom CNN architectures were designed with fewer than 25 layers. Then, the three best-performing architectures were selected and further fine-tuned to train three different models. The CNNs differed mainly in model architecture, such as depth-based residual networks and width-based inception networks. Finally, the three CNN models were combined by majority voting, the predicted label being the one chosen by the most models. The area under the receiver operating characteristic curve (ROC AUC) was used as the evaluation metric given the imbalanced label distribution. Results The imbalanced label distribution within pullbacks affected both convergence during the training phase and generalization of a CNN model. Different labels of OCT images could be classified with excellent performance by fine-tuning the parameters of the CNN architectures. Overall, our final ensemble performed best, with an accuracy of 90% on the 'calcified plaque' class, the minority class within each pullback. Conclusions The results show that the method is fast and effective at classifying calcified plaques under imbalanced label distributions within pullbacks. The proposed method could facilitate our understanding of coronary artery calcification in the progression of atherosclerosis and help guide complex interventional strategies in coronary arteries with superficial calcification.
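The ensemble step described in the Methods, three fine-tuned CNNs combined by majority voting, reduces per image to the following (the toy prediction vectors are hypothetical; 1 stands for 'calcified plaque'):

```python
from collections import Counter

def majority_vote(predictions):
    # For each image, the final label is the one predicted by the most
    # models; with three voters and two classes there are no ties.
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

# Hypothetical per-image outputs of the three fine-tuned CNNs.
model_a = [1, 0, 1, 0, 1]
model_b = [1, 0, 0, 0, 1]
model_c = [0, 0, 1, 1, 1]
final = majority_vote([model_a, model_b, model_c])
```

Because the three models differ in architecture (depth-based vs. width-based), their errors are less correlated, which is what makes the vote more accurate than any single model.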
The development of image classification is one of the most important research topics in remote sensing. The prediction accuracy depends not only on the appropriate choice of the machine learning method but also on the quality of the training datasets. However, real-world data is not perfect and often suffers from noise. This paper gives an overview of noise filtering methods. Firstly, the types of noise and the consequences of class noise on machine learning are presented. Secondly, class noise handling methods at both the data level and the algorithm level are introduced. Then ensemble-based class noise handling methods including class noise removal, correction, and noise robust ensemble learners are presented. Finally, a summary of existing data-cleaning techniques is given.
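The ensemble-based removal approach mentioned above can be sketched as a majority filter: several classifiers vote on each training example, and an example whose recorded label most classifiers contradict is flagged as class noise. The base classifiers and the 1-D toy data below are illustrative assumptions:

```python
def majority_noise_filter(X, y, classifiers):
    # Flag example i as class noise when more than half of the
    # classifiers disagree with its recorded label y[i].
    flagged = []
    for i, (x, label) in enumerate(zip(X, y)):
        wrong = sum(1 for clf in classifiers if clf(x) != label)
        if wrong > len(classifiers) / 2:
            flagged.append(i)
    return flagged

# 1-D data: class 0 below ~0.5, class 1 above; index 2 is mislabelled.
X = [0.1, 0.2, 0.3, 0.9, 1.0, 0.85]
y = [0,   0,   1,   1,   1,   1]
classifiers = [lambda x, t=t: 1 if x > t else 0 for t in (0.5, 0.55, 0.6)]
noisy = majority_noise_filter(X, y, classifiers)
```

Removal simply drops the flagged indices before training; the correction variant instead relabels them with the ensemble's majority prediction.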
Labeling of connected components is the key operation in target recognition and segmentation for remote sensing images. Conventional connected-component labeling (CCL) algorithms designed for ordinary optical images are time-consuming when processing remote sensing images because of their larger size. A dynamic run-length-based CCL algorithm (DyRLC) is proposed in this paper for large, coarse-granularity, sparse remote sensing images, such as space debris images and ship images. In addition, an equivalence matrix method is proposed to help design the pre-processing step that accelerates the resolution of equivalent labels. Results show that our algorithm outperforms the other algorithms by 22.86% in execution time on a space debris image dataset. The proposed algorithm can also be implemented on a field-programmable gate array (FPGA) to enable real-time on-board processing.
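DyRLC's dynamic run encoding and FPGA mapping are beyond a short sketch, but the run-length principle it builds on, labelling whole runs of foreground pixels and resolving label equivalences with union-find, can be shown compactly (4-connectivity; the tiny test image is illustrative):

```python
def rle_ccl(image):
    # Run-length connected-component labelling (4-connectivity):
    # 1) scan each row into runs of foreground pixels,
    # 2) union-merge runs that overlap a run in the previous row,
    # 3) resolve equivalences and paint consecutive final labels.
    parent = []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    runs, prev = [], []  # a run is (start, end, label), end exclusive
    for r, row in enumerate(image):
        cur, c = [], 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                label = len(parent)
                parent.append(label)
                for ps, pe, pl in prev:  # column overlap with previous row?
                    if ps < c and pe > start:
                        union(label, pl)
                cur.append((start, c, label))
                runs.append((r, start, c, label))
            else:
                c += 1
        prev = cur
    out = [[0] * len(row) for row in image]
    remap = {}  # root label -> consecutive final label 1..k
    for r, s, e, label in runs:
        root = find(label)
        if root not in remap:
            remap[root] = len(remap) + 1
        for c in range(s, e):
            out[r][c] = remap[root]
    return out

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
labels = rle_ccl(img)
```

Because equivalences are merged per run rather than per pixel, sparse, coarse-grained images (few long runs) trigger far fewer union operations, which is the property a run-length approach like DyRLC exploits.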
The concept and advantages of reconfigurable technology are introduced. A processor architecture, the reconfigurable macro processor (RMP) model based on an FPGA array and a DSP, is put forward and has been implemented. Two image algorithms are developed: template-based automatic target recognition and zone labeling. The first estimates motion direction against an infrared image background; the second is a line-extraction algorithm based on image zone labeling and phase grouping. Each is a 'hardware' function that can be called by the DSP from a high-level algorithm, that is, a hardware algorithm of the DSP. Experimental results show that reconfigurable computing based on the RMP is an ideal means of accelerating high-speed image processing tasks. High real-time performance is obtained in our two applications on the RMP.
Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they utilize only semantic dependency between words and do not incorporate other semantic information such as semantic collocation and semantic category. Several improvements to this parser are presented. First, valency is an essential semantic feature of words: once the valency of a word is determined, its collocations are clear and the sentence structure can be derived directly. Thus, a syntactic parsing model combining valence structure with semantic dependency is proposed on the basis of head-driven statistical syntactic parsing models. Second, semantic role labeling (SRL) is necessary for deep natural language processing, so an integrated approach is proposed that incorporates semantic parsing into the syntactic parsing process. Experiments with the refined statistical parser show that 87.12% precision and 85.04% recall are obtained, and the F-measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.
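As a quick consistency check on the reported figures, the balanced F-measure is the harmonic mean of precision and recall:

```python
def f_measure(precision, recall, beta=1.0):
    # F_beta = (1 + beta^2) * P * R / (beta^2 * P + R);
    # beta = 1 gives the balanced harmonic mean of precision and recall.
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

f1 = f_measure(0.8712, 0.8504)  # precision and recall reported above
```

This gives an F-measure of about 86.1% for the refined parser, lying between the reported recall and precision as a harmonic mean must.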
Non-rigid point matching has received increasing attention. Recently, many methods have been developed to discover global relationships in a point set, treating it as an instance of a joint distribution. However, the local relationships among neighboring points are more effective under non-rigid transformations. Thus, a new algorithm combining shape context with the relaxation labeling technique, called SC-RL, is proposed for non-rigid point matching. It jointly estimates the correspondences and the transformation. In this work, correspondence assignment is treated as a soft-assign process in which the matching probability is updated by relaxation labeling with a newly defined compatibility coefficient. The compatibility coefficient is one or zero depending on whether neighboring points preserve their relative positions in a local coordinate system. A comparative analysis against four state-of-the-art algorithms (SC, ICP, TPS-RPM, and RPM-LNS) shows that SC-RL performs better in the presence of deformations, outliers, and noise.
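The soft-assign update at the heart of relaxation labeling can be sketched directly. SC-RL's shape-context initialization and its specific 0/1 compatibility construction are not reproduced here, so the tiny compatibility tensor below is an illustrative assumption:

```python
def relaxation_step(P, R):
    # One relaxation-labelling iteration: support
    #   s[i][l] = sum_j sum_m R[i][j][l][m] * P[j][m],
    # then P[i][l] <- P[i][l] * (1 + s[i][l]), renormalised per point.
    n, L = len(P), len(P[0])
    new = []
    for i in range(n):
        row = []
        for l in range(L):
            s = sum(R[i][j][l][m] * P[j][m]
                    for j in range(n) for m in range(L))
            row.append(P[i][l] * (1 + s))
        z = sum(row)
        new.append([v / z for v in row])
    return new

# Two points, two candidate labels; the binary compatibilities say
# "point 0 takes label 0 iff point 1 takes label 1" (and vice versa).
n, L = 2, 2
R = [[[[0] * L for _ in range(L)] for _ in range(n)] for _ in range(n)]
R[0][1][0][1] = 1
R[1][0][1][0] = 1
P = [[0.5, 0.5], [0.5, 0.5]]
for _ in range(5):
    P = relaxation_step(P, R)
```

Starting from a uniform assignment, the compatible labelling (0→0, 1→1) rapidly accumulates probability mass, illustrating how binary compatibilities alone can disambiguate correspondences.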
To rapidly recognize circular objects in satellite remote sensing imagery, an approach using their geometric properties is presented. The original image is segmented into a binary image by a one-dimensional maximum-entropy thresholding algorithm, and the binary image is labeled with a recursion-based algorithm. Then, shape parameters of all labeled regions are calculated, and regions whose shape parameters satisfy certain conditions are recognized as circular objects. The algorithm is described in detail, and comparison experiments with the randomized Hough transform (RHT) are provided. Experimental results on synthetic and real images show that the proposed method offers fast recognition, high efficiency, and robustness to noise and jamming. In addition, the method performs well even when circular objects are slightly deformed or partly misshapen.
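A common shape parameter for the kind of circularity test described above is 4πA/P², which equals 1 for a perfect disc and decreases for elongated or ragged regions. The threshold value 0.85 below is an illustrative assumption, not the paper's condition:

```python
import math

def circularity(area, perimeter):
    # 4*pi*A / P^2: exactly 1.0 for an ideal disc, < 1.0 otherwise.
    return 4 * math.pi * area / perimeter ** 2

def is_circular(area, perimeter, tol=0.85):
    return circularity(area, perimeter) >= tol

# Ideal disc of radius 10 vs. a 20x5 rectangle (continuous values).
disc = circularity(math.pi * 10 ** 2, 2 * math.pi * 10)
rect = circularity(20 * 5, 2 * (20 + 5))
```

In practice the discrete area is the pixel count of a labeled region and the perimeter is estimated from its boundary, so the ideal value 1.0 is only approached, which is why a tolerance is needed.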
To improve the resource utilization ratio and shorten the recovery time of the shared path protection with differentiated reliability (SPP-DiR) algorithm, an algorithm called dynamic shared segment protection with differentiated reliability (DSSP-DiR) is proposed for survivable GMPLS networks. In the proposed algorithm, a primary path is dynamically divided into several segments according to the customers' differentiated reliability requirements. In the SPP-DiR algorithm the whole primary path must be protected, whereas in the DSSP-DiR algorithm only some segments of the primary path need protection, which reserves less backup bandwidth than SPP-DiR. Simulation results show that the DSSP-DiR algorithm achieves a higher resource utilization ratio, lower protection failure probability, and shorter recovery time than the SPP-DiR algorithm.
Funding: National Natural Science Foundation of China (No. 10671074 and No. 60673048); Natural Science Foundation of the Education Ministry of Anhui Province (No. KJ2007B124 and No. 2006KJ256B).
Funding: supported by the Fund for Foreign Scholars in University Research and Teaching Programs (B18039) and the Shaanxi Youth Fund (202J-JC-QN-0668).
Funding: supported by the National Natural Science Foundation of China (70571087) and the National Science Fund for Distinguished Young Scholars of China (70625005).
Funding: supported in part by the National Natural Science Foundation of China (NSFC) (11772093) and ARC (FT140101152).
Funding: supported by the National Natural Science Foundation of China (62201438, 61772397, 12005169); the Basic Research Program of Natural Sciences of Shaanxi Province (2021JC-23); the Yulin Science and Technology Bureau Science and Technology Development Special Project (CXY-2020-094); the Shaanxi Forestry Science and Technology Innovation Key Project (SXLK2022-02-8); and the Project of the Shaanxi Federation of Social Sciences (2022HZ1759).
文摘The development of image classification is one of the most important research topics in remote sensing. The prediction accuracy depends not only on the appropriate choice of the machine learning method but also on the quality of the training datasets. However, real-world data is not perfect and often suffers from noise. This paper gives an overview of noise filtering methods. Firstly, the types of noise and the consequences of class noise on machine learning are presented. Secondly, class noise handling methods at both the data level and the algorithm level are introduced. Then ensemble-based class noise handling methods including class noise removal, correction, and noise robust ensemble learners are presented. Finally, a summary of existing data-cleaning techniques is given.
Abstract: Labeling of connected components is a key operation in target recognition and segmentation in remote sensing images. Conventional connected-component labeling (CCL) algorithms designed for ordinary optical images are time-consuming when processing remote sensing images because of their larger size. A dynamic run-length-based CCL algorithm (DyRLC) is proposed in this paper for large, coarse-grained, sparse remote sensing images, such as space debris images and ship images. In addition, an equivalence matrix method is proposed to help design a pre-processing step that accelerates the resolution of equivalent labels. The results show that our algorithm outperforms the other algorithms by 22.86% in execution time on a space debris image dataset. The proposed algorithm can also be implemented on a field-programmable gate array (FPGA) to enable real-time on-board processing.
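The run-based idea behind such algorithms can be sketched in a few lines: label whole runs of foreground pixels instead of single pixels, record label equivalences when runs in adjacent rows overlap, and resolve them in a second pass (a simplified 4-connectivity sketch; the paper's DyRLC and equivalence-matrix pre-processing add further optimisations):

```python
def find(parent, i):
    # union-find root lookup with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def label_components(image):
    """Two-pass run-length connected-component labelling of a binary image
    (lists of 0/1 rows), 4-connectivity."""
    parent = []      # union-find over provisional run labels
    prev_runs = []   # (start, end, label) runs of the previous row
    out = [[0] * len(row) for row in image]
    for r, row in enumerate(image):
        cur_runs, c = [], 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1
                lbl = len(parent)
                parent.append(lbl)            # new provisional label
                for ps, pe, pl in prev_runs:  # merge with overlapping runs above
                    if ps < c and pe > start:
                        ra, rb = find(parent, lbl), find(parent, pl)
                        if ra != rb:
                            parent[rb] = ra
                cur_runs.append((start, c, lbl))
            else:
                c += 1
        for s, e, l in cur_runs:
            for cc in range(s, e):
                out[r][cc] = l
        prev_runs = cur_runs
    # second pass: map provisional labels to compact root labels 1..k
    remap = {}
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            if v:
                root = find(parent, out[r][c])
                out[r][c] = remap.setdefault(root, len(remap) + 1)
    return out

img = [[1, 1, 0, 1],
       [0, 1, 0, 1]]
print(label_components(img))  # [[1, 1, 0, 2], [0, 1, 0, 2]]
```

Because sparse images contain few runs, the per-run bookkeeping touches far less state than per-pixel labeling, which is exactly the regime the abstract targets.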
Abstract: The concept and advantages of reconfigurable technology are introduced. A processor architecture, the reconfigurable macro processor (RMP), based on an FPGA array and a DSP, is put forward and has been implemented. Two image algorithms were developed: template-based automatic target recognition and zone labeling. One estimates the motion direction in an infrared image background; the other is a line picking-up algorithm based on image zone labeling and phase-grouping techniques. Each is a 'hardware' function that can be called by the DSP from a high-level algorithm, i.e., a hardware algorithm of the DSP. Experimental results show that reconfigurable computing based on the RMP is an ideal means of accelerating high-speed image processing tasks. High real-time performance was obtained in both of our applications on the RMP.
Funding: Project (61262035) supported by the National Natural Science Foundation of China; Projects (GJJ12271, GJJ12742) supported by the Science and Technology Foundation of the Education Department of Jiangxi Province, China; Project (20122BAB201033) supported by the Natural Science Foundation of Jiangxi Province, China.
Abstract: Head-driven statistical models for natural language parsing are the most representative lexicalized syntactic parsing models, but they only utilize semantic dependency between words and do not incorporate other semantic information such as semantic collocation and semantic category. Several improvements to this parser are presented. First, "valency" is an essential semantic feature of words: once the valency of a word is determined, its collocations are clear and the sentence structure can be derived directly. Thus, a syntactic parsing model combining valency structure with semantic dependency is proposed on the basis of head-driven statistical syntactic parsing models. Second, semantic role labeling (SRL) is necessary for deep natural language processing, so an integrated approach is proposed that incorporates semantic parsing into the syntactic parsing process. Experiments were conducted with the refined statistical parser. The results show 87.12% precision and 85.04% recall, and the F-measure is improved by 5.68% compared with the head-driven parsing model introduced by Collins.
Funding: Project (61002022) supported by the National Natural Science Foundation of China; Project (2012M512168) supported by the China Postdoctoral Science Foundation.
Abstract: Non-rigid point matching has received increasing attention. Recently, many methods have been developed to discover global relationships in a point set treated as an instance of a joint distribution. However, local relationships among neighboring points are more robust under non-rigid transformations. Thus, a new algorithm called SC-RL, which combines shape context with the relaxation labeling technique, is proposed for non-rigid point matching. It is a strategy that jointly estimates the correspondences and the transformation. In this work, correspondence assignment is treated as a soft-assign process in which the matching probability is updated by relaxation labeling with a newly defined compatibility coefficient. The compatibility coefficient is one or zero depending on whether neighboring points preserve their relative positions in a local coordinate system. A comparative analysis was performed against four state-of-the-art algorithms, including SC, ICP, TPS-RPM and RPM-LNS, and the results show that SC-RL performs better in the presence of deformations, outliers and noise.
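One relaxation-labeling update of the matching probabilities can be sketched as follows (a generic formulation of the technique, not the exact SC-RL update; the binary compatibility function here is only in the spirit of the paper's coefficient):

```python
def relax_step(P, compat):
    """One relaxation-labelling update: the support for assigning point i
    to candidate j is accumulated from compatible assignments of the other
    points, then each row of matching probabilities is renormalised."""
    n, m = len(P), len(P[0])
    Q = []
    for i in range(n):
        row = []
        for j in range(m):
            support = sum(compat(i, j, h, k) * P[h][k]
                          for h in range(n) if h != i
                          for k in range(m))
            row.append(P[i][j] * (1.0 + support))
        z = sum(row) or 1.0
        Q.append([q / z for q in row])
    return Q

# Binary compatibility (hypothetical): assignments of different points to
# different candidates support each other (1), conflicting ones do not (0).
compat = lambda i, j, h, k: 1.0 if j != k else 0.0
P = [[0.6, 0.4],
     [0.3, 0.7]]
P1 = relax_step(P, compat)
# The mutually consistent assignments (0 -> 0, 1 -> 1) gain probability.
```

Iterating this update drives the soft assignment toward a consistent one-to-one matching, which is the role the soft-assign process plays in the abstract.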
Abstract: To recognize circular objects rapidly in satellite remote sensing imagery, an approach based on their geometric properties is presented. The original image is segmented into a binary image by a one-dimensional maximum-entropy thresholding algorithm, and the binary image is labeled with a recursion-based algorithm. Then, shape parameters of all labeled regions are calculated, and regions whose shape parameters satisfy certain conditions are recognized as circular objects. The algorithm is described in detail, and comparison experiments with the randomized Hough transform (RHT) are provided. The experimental results on synthetic and real images show that the proposed method offers fast recognition, high recognition efficiency, and resistance to noise and jamming. In addition, the method performs well even when some circular objects are slightly deformed or partly misshapen.
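One common shape parameter for such a test is compactness, 4πA/P², which equals 1 for a perfect disc and falls toward 0 for elongated regions (a plausible illustration; the paper does not specify its exact shape parameters or thresholds):

```python
import math

def circularity(area, perimeter):
    """Compactness 4*pi*A/P^2: 1.0 for a perfect circle, smaller for
    elongated or ragged regions."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def is_circular(area, perimeter, tol=0.85):
    # tol is a hypothetical threshold; a real system would tune it
    return circularity(area, perimeter) >= tol

# A disc of radius 10: A = pi*r^2, P = 2*pi*r -> compactness exactly 1.0
print(is_circular(math.pi * 100, 2 * math.pi * 10))  # True
# A 40x2 bar: A = 80, P = 84 -> far from circular
print(is_circular(80, 84))  # False
```

Because such per-region tests cost O(1) after labeling, they can be much faster than voting-based detectors like the RHT when the number of regions is small.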
Funding: supported by the National Natural Science Foundation of China (60673142) and the Applied Basic Research Project of Sichuan Province (2006J13-067).
Abstract: To improve the resource utilization ratio and shorten the recovery time of the shared path protection with differentiated reliability (SPP-DiR) algorithm, an algorithm called dynamic shared segment protection with differentiated reliability (DSSP-DiR) is proposed for survivable GMPLS networks. In the proposed algorithm, a primary path is dynamically divided into several segments according to the differentiated reliability requirements of the customers. In SPP-DiR, the whole primary path must be protected, whereas in DSSP-DiR only some segments of the primary path need protection, which reserves less backup bandwidth than SPP-DiR. Simulation results show that DSSP-DiR achieves a higher resource utilization ratio, lower protection failure probability, and shorter recovery time than SPP-DiR.