Funding: supported by a grant from the Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology (Grant No. GZZKFJJ2020004), the National Natural Science Foundation of China (Grant Nos. 61875013 and 61827814), and the Natural Science Foundation of Beijing Municipality (Grant No. Z190018).
Abstract: Visible-light imaging systems used in military equipment are often subjected to severe weather, such as fog, haze, and smoke, under the complex lighting conditions of night, which significantly degrade the acquired images. Currently available image-defogging methods are mostly suited to daytime scenes with natural light, and the clarity of images captured at night, under complex and spatially varying illumination in the presence of fog, is not satisfactory. This study proposes an algorithm to remove night fog from single images, based on an analysis of the statistical characteristics of images of nighttime foggy scenes. A color channel transfer is designed to compensate for the highly attenuated channel of foggy images acquired at night. The distribution of transmittance is estimated by the deep convolutional network DehazeNet, and the spatially varying atmospheric light is estimated point by point according to the maximum reflectance prior to recover the clear image. Experimental results show that, compared with conventional methods, the proposed method can compensate for the highly attenuated channel of nighttime foggy images, remove the glow of multi-colored, non-uniform ambient light sources, and improve the adaptability and visual quality of night-fog removal.
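The recovery step in the abstract rests on the standard atmospheric scattering model, I(x) = J(x)·t(x) + A(x)·(1 − t(x)), with a spatially varying atmospheric light A(x). As a minimal sketch (not the paper's implementation), assuming a transmittance map `t` (e.g. from DehazeNet) and a per-pixel atmospheric-light map `A` are already available, the clear image can be recovered by inverting the model:

```python
import numpy as np

def recover_scene(hazy, t, A, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy : (H, W, 3) observed foggy image, values in [0, 1]
    t    : (H, W) transmittance map (assumed given, e.g. by DehazeNet)
    A    : (H, W, 3) per-pixel atmospheric light (assumed given, e.g.
           estimated point by point with a maximum-reflectance-style prior)
    """
    # Lower-bound t to avoid dividing by near-zero transmittance,
    # a common stabilization in dehazing pipelines.
    t = np.clip(t, t_min, 1.0)[..., None]
    J = (hazy - A * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```

The function names and the clamping threshold here are illustrative choices; the paper's actual estimation of `t` and `A` is the learned and prior-based part, while this inversion is the closed-form final step.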
Abstract: For various reasons, engineers build a 3D model of their product from a set of related engineering drawings. The problem is how to know that the 3D model is correct. Manual checking is tedious and time-consuming, and still cannot avoid mistakes; we therefore cannot confirm the model and may have to check it again. This greatly affects the production preparation cycle and should be solved in an intelligent way. The difficulties are obvious: unlike spell checking in a word processor, the checking described above is not a comparison between items of the same kind. One item is a 2D drawing, the other a 3D model; they are not in the same dimension, so a transformation is needed to compare them in a common dimension. If we could rebuild a 3D model from the related 2D drawings automatically, that would be ideal: we could not only compare two 3D models to check and correct, but also omit the manual process entirely. Unfortunately, such a 3D model cannot yet be built automatically, so only one way remains: compare two 2D drawings, one the original and the other generated from the manually built model. The method is to select a drawing as a background, rotate the 3D model and make projections, and compare each projection with the background automatically until a pose is found in which the two agree within a certain amount of error (tolerance); otherwise, an alarm is raised. This process can be repeated as many times as needed to complete the checking task. This is also a man-machine system: the computer does the hard work, and the human makes the final decision. The project involves CAD, VRML, pattern recognition, image capture and comparison, and artificial intelligence.
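The core automatic step above — comparing a projection of the 3D model with the background drawing within a tolerance — can be sketched as a simple pixel-level test. This is a hypothetical illustration, not the project's actual matcher: the function name, the binary-image representation, and the mismatch-fraction criterion are all assumptions introduced here.

```python
import numpy as np

def projection_matches(drawing, projection, tolerance=0.02):
    """Compare a projection of the 3D model against the background drawing.

    drawing, projection : (H, W) binary arrays (1 = line pixel),
                          assumed already registered to the same raster
    tolerance : maximum allowed fraction of mismatched pixels

    Returns True when the images agree within the tolerance; the caller
    would otherwise raise an alarm or try the next rotation of the model.
    """
    # Pixels where exactly one of the two images has a line mark.
    mismatch = np.logical_xor(drawing.astype(bool), projection.astype(bool))
    return mismatch.mean() <= tolerance
```

In the man-machine scheme described, a loop over candidate rotations would call a check like this for each projection, and a human would review any pose the computer flags as a match.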