Due to the selective absorption of light and the large number of suspended particles in sea water, underwater images often suffer from color casts and blurred details. It is therefore necessary to perform color correction and detail restoration, but existing enhancement algorithms cannot achieve the desired results. To solve these problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from an illumination stream, a color stream and a structure stream by contrast-limited histogram equalization, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this enhances the feature representation of underwater images. Meanwhile, a composite loss function with three terms is used to ensure the quality of the enhanced image in terms of color balance, structure preservation and image smoothness, so that the enhanced image is more consistent with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. Experimental results show that the proposed method outperforms them in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and that the enhanced images are closer to their ground-truth images.
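A minimal sketch of the three preprocessing streams described in this abstract, using OpenCV and NumPy; the clip limit, gamma value and gray-world white balance are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def make_streams(bgr):
    """Build illumination, color and structure streams from a raw BGR underwater image."""
    # Illumination stream: contrast-limited adaptive histogram equalization (CLAHE)
    # applied to the luminance channel.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[..., 0] = clahe.apply(lab[..., 0])
    illumination = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Color stream: gamma correction via a lookup table (gamma < 1 brightens).
    gamma = 0.7
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)], dtype=np.uint8)
    color = cv2.LUT(bgr, lut)

    # Structure stream: simple gray-world white balance.
    img = bgr.astype(np.float32)
    gains = img.mean() / (img.reshape(-1, 3).mean(axis=0) + 1e-6)
    structure = np.clip(img * gains, 0, 255).astype(np.uint8)

    return illumination, color, structure
```

The three outputs, together with the raw image, would then form the four input streams fed to the residual feature-extraction blocks.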
Aim To fuse the fluorescence image and transmission image of a cell into a single image containing more information than either individual image. Methods Image fusion technology was applied to biological cell image processing; it can match the images and improve their confidence and spatial resolution. Using two algorithms, a double-threshold algorithm and a wavelet-transform-based denoising algorithm, the fluorescence image and transmission image of a cell were merged into a composite image. Results and Conclusion The position of the fluorescence and the structure of the cell can be displayed in the composite image, and the signal-to-noise ratio of the resultant image is improved to a large extent. The algorithms are not only useful for investigating fluorescence and transmission images, but are also suitable for observing two or more fluorescent label probes in a single cell.
Considering that no single full-reference image quality assessment (IQA) method gives the best performance in all situations, several multi-method fusion metrics have been proposed. Machine learning techniques are often involved in such multi-method fusion metrics so that their output is more consistent with human visual perception. On the other hand, the robustness and generalization ability of these multi-method fusion metrics are questioned because of the scarcity of images with mean opinion scores. To comprehensively validate whether the generalization ability of such multi-method fusion IQA metrics is satisfactory, we construct a new image database containing up to 60 reference images. The newly built database is then used to test the generalization ability of different multi-method fusion IQA metrics. A cross-database validation experiment indicates that, on our new database, the performance of all the multi-method fusion IQA metrics is not statistically significantly different from that of some single-method IQA metrics such as FSIM and MAD. Finally, a thorough analysis is given to explain why the performance of the multi-method fusion IQA framework drops significantly in cross-database validation.
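As an illustration of the general multi-method fusion idea, the sketch below trains a regressor to map several single-metric scores to a mean opinion score and then applies it to a second database; the metric set, placeholder data and SVR hyper-parameters are assumptions, not the metrics or models evaluated in the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# X: each row holds the scores of several single-method metrics
# (e.g. PSNR, SSIM, FSIM, MAD) for one image; y: that image's mean opinion score.
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))        # placeholder feature matrix from database A
y_train = rng.random(200) * 5.0       # placeholder MOS values from database A

fusion_model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
fusion_model.fit(X_train, y_train)

# Cross-database validation: fit on one database, predict quality on another.
X_other_db = rng.random((60, 4))      # placeholder scores from database B
predicted_quality = fusion_model.predict(X_other_db)
```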
Image classification based on bag-of-words (BOW) has broad application prospects in the pattern recognition field, but it suffers from obvious shortcomings such as reliance on a single feature and low classification accuracy. To deal with this problem, this paper proposes to combine two ingredients: (i) three mutually complementary features are adopted to describe the images, namely the pyramid histogram of words (PHOW), the pyramid histogram of color (PHOC) and the pyramid histogram of orientated gradients (PHOG); (ii) an adaptive feature-weight-adjusted image categorization algorithm based on SVM and decision-level fusion of the multiple features is employed. Experiments carried out on the Caltech101 database confirm the validity of the proposed approach. The experimental results show that the classification accuracy of the proposed method is 7%-14% higher than that of traditional BOW methods. By making full use of global, local and spatial information, the algorithm describes the feature information of the image more completely and flexibly through multi-feature fusion and the pyramid structure formed by multi-resolution spatial decomposition, which yields the significant improvement in classification accuracy.
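A minimal sketch of decision-level fusion of feature-specific SVM classifiers, as described in ingredient (ii); the adaptive weight computation is not specified in the abstract, so the per-feature weights are simply passed in as given. Feature names and shapes are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def decision_level_fusion(train_features, labels, test_features, weights):
    """Train one SVM per feature type (e.g. PHOW, PHOC, PHOG) and fuse their
    class-probability outputs with per-feature weights at the decision level."""
    fused = None
    for X_train, X_test, w in zip(train_features, test_features, weights):
        clf = SVC(kernel="rbf", probability=True).fit(X_train, labels)
        proba = clf.predict_proba(X_test)              # shape (n_test, n_classes)
        fused = w * proba if fused is None else fused + w * proba
    return fused.argmax(axis=1)                        # fused class decision per image
```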
In the process of in situ leaching of uranium, the microstructure controls and influences the flow distribution, percolation characteristics, and reaction mechanism of lixivium in the pores of reservoir rocks and directly affects the leaching of useful components. In this study, the pore throats, pore size distribution, and mineral composition of low-permeability uranium-bearing sandstone were quantitatively analyzed by high-pressure mercury injection, nuclear magnetic resonance, X-ray diffraction, and wavelength-dispersive X-ray fluorescence. The distribution characteristics of pores and minerals in the samples were qualitatively analyzed using energy-dispersive scanning electron microscopy and multi-resolution CT images. Image registration with the landmarks algorithm provided by FEI Avizo was used to accurately match the CT images of different resolutions. A multi-scale, multi-mineral digital core model of the low-permeability uranium-bearing sandstone was reconstructed through pore segmentation and mineral segmentation of the fused core scanning images. The results show that the pore structure of low-permeability uranium-bearing sandstone is complex, with multi-scale and multi-crossing characteristics. The intergranular pores determine the main seepage channels in the pore space, and the secondary pores have poor connectivity with other pores. Pyrite and coffinite are isolated from the connected pores and surrounded by a large number of clay minerals and ankerite cements, which increases the difficulty of uranium leaching. Clays and a large amount of ankerite cement fill the primary and secondary pores and pore throats of the low-permeability uranium-bearing sandstone, which significantly reduces the porosity available to movable fluid and results in low overall permeability of the cores. The multi-scale, multi-mineral digital core proposed in this study provides a basis for characterizing the macroscopic and microscopic pore-throat structures and mineral distributions of low-permeability uranium-bearing sandstone and enables a better understanding of its seepage characteristics.
In order to improve the detail preservation and target information integrity of fused images from different sensors, an image fusion method based on the non-subsampled contourlet transform (NSCT) and the GoogLeNet neural network model is proposed. First, the images from different sensors, i.e., the infrared and visible images, are each transformed by NSCT to obtain a low-frequency sub-band and a series of high-frequency sub-bands. Then, the high-frequency sub-bands are fused with a maximum-regional-energy selection strategy, the low-frequency sub-bands are input into the GoogLeNet model to extract feature maps, and the fusion weight matrices are adaptively calculated from the feature maps. Next, the fused low-frequency sub-band is obtained by weighted summation. Finally, the fused image is obtained by the inverse NSCT. The experimental results demonstrate that the proposed method improves the visual effect of the image and achieves better performance in both edge retention and mutual information.
To improve the quality of infrared images and enhance object information, a dual-band infrared image fusion method based on feature extraction and a novel multiple pulse coupled neural network (multi-PCNN) is proposed. In this multi-PCNN fusion scheme, an auxiliary PCNN, which captures the characteristics of a feature image extracted from the infrared image, is used to modulate the main PCNN, whose input is the original infrared image. Meanwhile, to make the PCNN fusion effect consistent with the human visual system, Laplacian energy is adopted to obtain the value of the adaptive linking strength in the PCNN. After that, the original dual-band infrared images are reconstructed using a weighted fusion rule with the fire mapping images generated by the main PCNNs to obtain the fused image. Compared with wavelet transforms, Laplacian pyramids and traditional multi-PCNNs, fusion images produced by our method contain more information, richer details and clearer edges.
Objective To explore the efficacy of target positioning by the preoperative CT/MRI image fusion technique in deep brain stimulation. Methods We retrospectively analyzed the clinical data and images of 79 cases (68 with Parkinson's disease, 11 with dystonia) who received preoperative CT/MRI image fusion for target positioning of the subthalamic nucleus in deep brain stimulation. The deviation of the implanted electrodes from the target nucleus was measured for each patient. Neurological evaluations of each patient before and after treatment were performed and compared, and complications of the positioning and treatment were recorded. Results The mean deviations of the implanted electrodes on the X, Y, and Z axes were 0.5 mm, 0.6 mm, and 0.6 mm, respectively. Postoperative scores on the Unified Parkinson's Disease Rating Scale (UPDRS) for Parkinson's disease patients and the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) for dystonia patients improved significantly compared with the preoperative scores (P<0.001). Complications occurred in 10.1% (8/79) of patients, and the main side effects were dysarthria and diplopia. Conclusion Target positioning by the preoperative CT/MRI image fusion technique in deep brain stimulation has high accuracy and good clinical outcomes.
The rise of urban traffic flow highlights the growing importance of traffic safety. In order to reduce the occurrence rate of traffic accidents and improve the forward visual information available to vehicle drivers, a method for improving the driver's visual information under low-visibility conditions is put forward based on infrared and visible image fusion. A wavelet-based fusion algorithm is adopted to decompose each image into low-frequency approximation components and high-frequency detail components. The low-frequency component contains information representing gray-value differences, while the high-frequency components contain the detail information of the image, whose quality is frequently assessed by the gray-level standard deviation. To extract the feature information of the low-frequency and high-frequency components with different emphases, different fusion operators are applied to each. For the low-frequency component, a fusion rule weighted by regional energy proportion is adopted to improve image brightness, and a fusion rule weighted by the regional proportion of standard deviation is used for all three high-frequency components to enhance image contrast. Experiments on the fusion of infrared and visible-light images demonstrate that this fusion method can effectively improve image brightness and contrast, and that it is suitable for vision enhancement of low-visibility images.
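A one-level sketch of the wavelet fusion rules described above (regional-energy weighting for the low-frequency band, regional-standard-deviation weighting for the high-frequency bands), using PyWavelets and SciPy; the wavelet choice and window size are assumptions, and both inputs are assumed to be grayscale arrays of the same shape.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def regional_energy(band, size=3):
    return uniform_filter(band ** 2, size)

def regional_std(band, size=3):
    mean = uniform_filter(band, size)
    return np.sqrt(np.maximum(uniform_filter(band ** 2, size) - mean ** 2, 0))

def fuse_ir_visible(ir, vis):
    """One-level wavelet fusion: weight low-frequency coefficients by regional
    energy and high-frequency coefficients by regional standard deviation."""
    la, (lh, lv, ld) = pywt.dwt2(ir.astype(float), "db2")
    lb, (hh, hv, hd) = pywt.dwt2(vis.astype(float), "db2")

    # Brightness-oriented rule for the approximation band.
    ea, eb = regional_energy(la), regional_energy(lb)
    low = (ea * la + eb * lb) / (ea + eb + 1e-9)

    # Contrast-oriented rule for the three detail bands.
    highs = []
    for a, b in zip((lh, lv, ld), (hh, hv, hd)):
        sa, sb = regional_std(a), regional_std(b)
        highs.append((sa * a + sb * b) / (sa + sb + 1e-9))

    return pywt.idwt2((low, tuple(highs)), "db2")
```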
A novel fusion method for multispectral and panchromatic images based on the nonsubsampled contourlet transform (NSCT) and non-negative matrix factorization (NMF) is presented, the aim of which is to preserve both spectral and spatial information simultaneously in the fused image. NMF is a matrix factorization method that can extract local features by choosing a suitable dimension for the feature subspace. Firstly, the multispectral image is represented in the intensity-hue-saturation (IHS) system. Then the I component and the panchromatic image are decomposed by NSCT. Next, NMF is used to learn the features of the low-frequency sub-bands of both the multispectral and panchromatic images, and the remaining coefficients are selected by the absolute-maximum criterion. Finally, the new coefficients are reconstructed to obtain the fused image. Experiments are carried out and the results are compared with those of other methods, which shows that the new method performs better in improving spatial resolution and preserving feature information than the other existing related methods.
The high-frequency components produced by traditional multi-scale transform methods are approximately sparse and can represent different details of the image. In the low-frequency component, however, very few coefficients lie around zero, so the low-frequency image information cannot be sparsely represented. The low-frequency component contains the main energy of the image and depicts its profile, so direct fusion of the low-frequency component is not conducive to obtaining a highly accurate fusion result. Therefore, this paper presents an infrared and visible image fusion method combining the multi-scale and top-hat transforms. On one hand, the new top-hat transform can effectively extract the salient features of the low-frequency component. On the other hand, the multi-scale transform can extract high-frequency detail information at multiple scales and from diverse directions. The combination of the two methods is conducive to capturing more characteristics and obtaining more accurate fusion results. Specifically, for the low-frequency component, a new type of top-hat transform is used to extract low-frequency features, and different fusion rules are then applied to fuse the low-frequency features and the low-frequency background; for the high-frequency components, the product-of-characteristics method is used to integrate the detail information. Experimental results show that the proposed algorithm can obtain more detailed information and clearer fused infrared targets than traditional multi-scale transform methods. Compared with state-of-the-art fusion methods based on sparse representation, the proposed algorithm is simple and efficacious, and its time consumption is significantly reduced.
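The "new type of top-hat transform" is not specified in the abstract; the sketch below uses the classical white and black top-hat transforms (OpenCV) to split a low-frequency component into salient features and background, with a simple max-based feature fusion rule, as an assumption of how such a scheme can look.

```python
import cv2
import numpy as np

def tophat_features(low_freq, kernel_size=15):
    """Split a low-frequency component into bright/dark salient features and a
    residual background using morphological top-hat transforms."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    img = low_freq.astype(np.float32)
    opened = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
    closed = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    white_tophat = img - opened      # bright structures smaller than the structuring element
    black_tophat = closed - img      # dark structures smaller than the structuring element
    # Background: small bright features suppressed, small dark features filled in.
    background = img - white_tophat + black_tophat
    return white_tophat, black_tophat, background

def fuse_low_frequency(low_ir, low_vis):
    """Keep the stronger bright/dark feature response, then add it back onto
    the averaged backgrounds."""
    w_ir, b_ir, bg_ir = tophat_features(low_ir)
    w_vis, b_vis, bg_vis = tophat_features(low_vis)
    features = np.maximum(w_ir, w_vis) - np.maximum(b_ir, b_vis)
    return (bg_ir + bg_vis) / 2 + features
```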
Infrared-visible image fusion plays an important role in multi-source data fusion, with the advantage of integrating useful information from multi-source sensors. However, there are still challenges in target enhancement and visual improvement. To deal with these problems, a sub-regional infrared-visible image fusion method (SRF) is proposed. First, morphological processing and threshold segmentation are applied to extract targets of interest from the infrared image. Second, the infrared background is reconstructed based on the extracted targets and the visible image. Finally, the target and background regions are fused using a multi-scale transform. Experimental results obtained on public data for comparison and evaluation demonstrate that the proposed SRF has potential benefits over other methods.
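A minimal sketch of the target-extraction step in this kind of sub-regional fusion: Otsu thresholding plus morphological cleanup on the infrared image, followed by region-wise blending. The actual SRF reconstructs the background and fuses it with a multi-scale transform, so the direct pasting shown here is a simplifying assumption.

```python
import cv2
import numpy as np

def srf_like_fuse(ir, vis_gray):
    """Extract salient infrared targets by Otsu thresholding plus morphological
    cleanup, then paste them over the visible-image background."""
    ir_u8 = cv2.normalize(ir, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(ir_u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove small speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill holes in targets

    m = (mask / 255.0).astype(np.float32)
    # Background from the visible image, target regions from the infrared image.
    fused = (1 - m) * vis_gray.astype(np.float32) + m * ir_u8.astype(np.float32)
    return fused.astype(np.uint8), mask
```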
A homological multi-information image fusion method was introduced for the recognition of gastric tumor pathological tissue images. The main purpose is to provide more information with fewer procedures and to produce result images that are easier to understand than those of other methods. First, a multi-scale wavelet transform was used to extract edge features, and then watershed morphology was used to form multi-threshold grayscale contours. The research emphasized homological tissue image fusion based on an extended Bayesian algorithm, and the fusion result images of a linear weighted algorithm were compared with those of the extended Bayesian algorithm; the final fusion images are shown in Fig. 5. The final image evaluation was made using information entropy, information correlation and statistical methods. The results indicate that this method offers advantages for clinical application.
The speed and quality of image fusion always constrain each other, and real-time image fusion is a problem that urgently needs to be studied and solved. The windowing processing technique for image fusion proposed in this paper can solve this problem to a certain extent. The windowing rules are put forward, and the applicable scope of windowing fusion and the calculation method for the maximum windowing area are determined. The results of windowing fusion are then analyzed, verified and compared to confirm the feasibility of this technique.
In our study, a support vector value contourlet transform is constructed using a support vector regression model and directional filter banks. The transform is then used to decompose the source images at multiple scales, directions and resolutions. After that, the super-resolved multi-spectral image is reconstructed by exploiting the strong learning ability of support vector regression and the correlation between the multi-spectral and panchromatic images. Finally, the super-resolved multi-spectral image and the panchromatic image are fused region by region at different levels. Our experiments show that the learning method based on support vector regression can improve the super-resolution of the multi-spectral image, and that the fused image preserves both the high spatial resolution and the spectral information of the multi-spectral image.
The advantages and disadvantages of two existing methods for explosive field visualization are analyzed in this paper, and a new method based on image fusion is proposed to integrate their complementary advantages. With this method, two source images built by equal mapping and modulus mapping are individually decomposed into two Gaussian-Laplacian pyramid sequences. Then, the two sequences are combined into a composite sequence according to the fusion process. Finally, a new image is reconstructed from the composite sequence. Experimental results show that the new images integrate the advantages of the sources, effectively improve the visualization, and disclose more information about the explosive field.
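A minimal sketch of Gaussian-Laplacian pyramid fusion with OpenCV, assuming two grayscale source images of the same size; the max-magnitude rule for detail levels and averaging for the coarsest level are common defaults, not necessarily the composition rule used in the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid; the last element keeps the coarsest Gaussian level."""
    gauss = [img.astype(np.float32)]
    for _ in range(levels):
        gauss.append(cv2.pyrDown(gauss[-1]))
    lap = []
    for i in range(levels):
        up = cv2.pyrUp(gauss[i + 1], dstsize=(gauss[i].shape[1], gauss[i].shape[0]))
        lap.append(gauss[i] - up)
    lap.append(gauss[-1])
    return lap

def fuse_pyramids(img_a, img_b, levels=4):
    """Fuse two source images by merging their Laplacian pyramids level by level."""
    la, lb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(la[:-1], lb[:-1])]
    fused.append((la[-1] + lb[-1]) / 2)
    # Collapse the composite pyramid back into a single image.
    out = fused[-1]
    for level in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(level.shape[1], level.shape[0])) + level
    return out
```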
Based on the characteristic that human eyes are sensitive to brightness and color, the lightness information of the visible image, the degree of linear polarization and the polarization angle were fused in hue-saturation-value (HSV) space. To suit human observation, hue adjustment based on color transfer was applied to the fused image, with the hue adjusted by a polynomial fitting method. The hue adjustment method was further improved by considering the complicated real mapping relationship between the hue gray scale of the fused image and that of the reference template image. The results show that the color fusion method presented in this paper is superior to the traditional pseudo-color method and helps to correctly recognize the target against the environment. The fusion result reflects differences in the objects' polarization characteristics and yields a natural-looking fused image.
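A minimal sketch of the HSV mapping described above (polarization angle to hue, degree of linear polarization to saturation, visible lightness to value), using OpenCV; the hue adjustment by color transfer and polynomial fitting is not reproduced. The angle is assumed to lie in [0, π] and the DoLP in [0, 1].

```python
import cv2
import numpy as np

def fuse_polarization_hsv(intensity, dolp, aop):
    """Map AoP -> hue, DoLP -> saturation, visible intensity -> value,
    then convert HSV to BGR for display."""
    h = (aop / np.pi) * 179.0                   # OpenCV 8-bit hue range is [0, 179]
    s = np.clip(dolp, 0.0, 1.0) * 255.0         # DoLP assumed in [0, 1]
    v = cv2.normalize(intensity, None, 0, 255, cv2.NORM_MINMAX)
    hsv = np.dstack([h, s, v]).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```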
Preliminary studies of multimodality image registration and fusion were performed using image fusion software and a picture archiving and communication system (PACS) to explore the methodology. Original image volume data were acquired with a CT scanner, an MR scanner and a dual-head coincidence SPECT scanner, respectively. The data sets from all imaging devices were queried, retrieved, transferred and accessed via the DICOM PACS. Image fusion was performed at the SPECT ICON workstation, where the MIM (Medical Image Merge) fusion software was installed. The images were created by reslicing the original volume on the fly. The image volumes were aligned by translation and rotation of the view ports with respect to the original volume orientation. The transparency factor and contrast were adjusted so that both volumes could be visualized in the merged images. The image volume data of CT, MR and nuclear medicine were transferred, accessed and loaded via PACS successfully, and well-fused images of chest CT/18F-FDG and brain MR/SPECT were obtained. These results showed that the image fusion technique using PACS is feasible and practical; further experimentation and larger validation studies are needed to explore its full potential for clinical use.
Objective. To compare and match metabolic images from PET with anatomic images from CT and MRI. Methods. The CT or MRI images of the patients were obtained through a photo scanner and then transferred to the remote workstation of the PET scanner on a floppy disk. A fusion method was developed to match the 2-dimensional CT or MRI slices with the corresponding slices of the 3-dimensional volume PET images. Results. Twenty-nine metabolically changed foci were accurately localized in the MRI images of 21 epilepsy patients, while MRI alone had only 6 true-positive findings. In 53 patients with cancer or suspected cancer, 53 positive lesions detected by PET were compared and matched with the corresponding lesions in the CT or MRI images, in which 10 lesions had been missed. On the other hand, 23 lesions detected in the patients' CT or MRI images were negative or showed low uptake in the PET images and were finally proven benign. Conclusions. Comparing and matching metabolic images with anatomic images helps obtain a full understanding of the lesion and its surrounding structures. The fusion method is simple, practical and useful for localizing metabolically changed lesions.
Two key challenges for a product image classification system are classification precision and classification time. In some categories, the classification precision of the latest techniques in product image classification systems is still low. In this paper, we propose a local texture descriptor termed the fan refined local binary pattern, which captures more detailed information by integrating the spatial distribution into the local binary pattern feature. We compare our approach with different methods on a subset of product images from Amazon/eBay and parts of PI100, and the experimental results demonstrate that our proposed approach is superior to existing methods. The highest classification precision is increased by 21%, and the average classification time is reduced by two-thirds.
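The fan refined local binary pattern itself is the paper's contribution and is not specified in the abstract; the sketch below computes the plain uniform LBP histogram (scikit-image) that such a descriptor refines, as a baseline assumption.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray, points=8, radius=1):
    """Uniform LBP histogram of a grayscale product image (pixel values in [0, 255])."""
    codes = local_binary_pattern(gray, P=points, R=radius, method="uniform")
    n_bins = points + 2                      # uniform patterns plus one non-uniform bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist
```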