IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): —
Application No.: US-0371186 (2009-02-13)
Registration No.: US-8179393 (2012-05-15)
Inventors / Address:
- Minear, Kathleen
- Pooley, Donald
- Smith, Anthony O'Neil
Applicant / Address: —
Agent / Address: —
Citation Information: cited by 19 patents; cites 33 patents
Abstract
Method and system for combining a 2D image with a 3D point cloud for improved visualization of a common scene as well as interpretation of the success of the registration process. The resulting fused data contains the combined information from the original 3D point cloud and the information from the 2D image. The original 3D point cloud data is color coded in accordance with a color map tagging process. By fusing data from different sensors, the resulting scene has several useful attributes relating to battle space awareness, target identification, change detection within a rendered scene, and determination of registration success.
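The color-map tagging process the abstract describes can be sketched as follows. This is an illustrative sketch only, not the patented implementation: every name here (COLOR_MAPS, colorize_points, the height-ramp color maps) is an assumption made for demonstration.

```python
# Hedged sketch of the abstract's idea: 3D point-cloud data is color coded
# according to content tags derived from the registered 2D image.
import numpy as np

# Hypothetical color maps keyed by content tag; each maps a normalized
# height in [0, 1] to an RGB triple.
COLOR_MAPS = {
    "urban": lambda t: (t, t, t),        # gray ramp
    "natural": lambda t: (0.0, t, 0.0),  # green ramp
    "water": lambda t: (0.0, 0.0, t),    # blue ramp
}

def colorize_points(points_xyz, area_of_point, tag_of_area):
    """Color-code 3D points: each point inherits the color map that was
    tagged onto its scene area from the 2D image."""
    z = points_xyz[:, 2]
    span = z.max() - z.min()
    t = (z - z.min()) / (span if span > 0 else 1.0)  # normalize height
    colors = np.empty((len(points_xyz), 3))
    for i, area in enumerate(area_of_point):
        colors[i] = COLOR_MAPS[tag_of_area[area]](t[i])
    return colors
```

For example, points assigned to a "water"-tagged area are rendered on a blue ramp while an "urban"-tagged area is rendered in gray, so the fused scene visually separates content classes.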
Representative Claims
1. A method performed by a computing device for combining a 2D image with a 3D image for improved visualization of a common scene, comprising:
analyzing said 2D image to identify selected content based characteristics of a plurality of first areas in said common scene;
selectively assigning to each area of said plurality of first areas a different color map tag of a plurality of color map tags that corresponds to a content based characteristic of said area, each of said plurality of color map tags identifying a different one of a plurality of color maps;
using said plurality of color map tags to assign a different color map of said plurality of color maps to each of a plurality of second areas of said 3D image which correspond to said plurality of first areas of said 2D image;
respectively applying said plurality of color maps to data defining said plurality of second areas of said 3D image;
forming a virtual 3D image from said 2D image by assigning a Z value to pixels in said 2D image based on a contour of a surface defined by first points of said 3D image;
determining color values for second points in the virtual 3D image based on a desired color map; and
creating a fused image by overlaying said 3D image and said virtual 3D image.

2. The method according to claim 1, further comprising evaluating a performance or quality of a registration process by visually inspecting said fused image to determine if features in said common scene are properly aligned.

3. The method according to claim 1, wherein said content based characteristics are selected from the group consisting of urban content, natural content, water content, and man-made structure content.

4. The method according to claim 3, wherein said man-made structure content is selected from the group consisting of buildings, houses, roadways, and vehicles.

5. The method according to claim 1, wherein said 3D image is comprised of a plurality of points comprising a 3D point cloud where each point is defined in accordance with an X, Y, and Z coordinate axis value, and said 2D image is comprised of a plurality of pixels having a position defined exclusively in accordance with said X and Y coordinate axis values.

6. The method according to claim 5, further comprising assigning each of said plurality of color maps to one or more points having X, Y, and Z coordinate values within areas in said 3D image based on said plurality of color map tags assigned to corresponding ones of said X, Y coordinate values of said plurality of areas identified in said 2D image.

7. The method according to claim 1, further comprising removing a portion of said 3D image comprising data defining said surface prior to creating said fused image.

8. The method according to claim 5, wherein said Z values are assigned by: assigning to at least a first one of said plurality of pixels of said 2D image a first Z value of a corresponding one of said first points of said 3D image that has an X coordinate value and a Y coordinate value that is the same thereof; and interpolating or estimating a second Z value for at least a second one of said plurality of pixels of said 2D image if none of said first points in said 3D image has the same X coordinate value and Y coordinate value as said one of said plurality of pixels.

9. The method according to claim 1, wherein at least one color map of said plurality of color maps is selected to mimic colors or hues that are commonly associated with said content based characteristic of the area for which the color map is used.

10. The method according to claim 1, further comprising registering said 2D image and said 3D image.

11. A system for combining a 2D image with a 3D image for improved visualization of a common scene, comprising: a computer programmed with a set of instructions for
analyzing said 2D image to identify selected content based characteristics of a plurality of first areas in said common scene;
selectively assigning to each area of said plurality of first areas a different color map tag of a plurality of color map tags that corresponds to a content based characteristic of said area, each of said plurality of color map tags identifying a different one of a plurality of color maps;
using said plurality of color map tags to assign a different color map of said plurality of color maps to each of a plurality of second areas of said 3D image which correspond to said plurality of first areas of said 2D image;
respectively applying said plurality of color maps to data defining said plurality of second areas of said 3D image;
forming a virtual 3D image from said 2D image by assigning a Z value to pixels in said 2D image, each said Z value determined based on a contour of a surface defined by first points of said 3D image;
determining color values for second points in the virtual 3D image based on a desired color map and on color values of a corresponding pixel in said 2D image; and
creating a fused image by overlaying said 3D image and said virtual 3D image.

12. The system according to claim 11, further comprising evaluating a performance or quality of a registration process by visually inspecting said fused image to determine if features in said common scene are properly aligned.

13. The system according to claim 11, wherein said content based characteristics are selected from the group consisting of urban content, natural content, water content, and man-made structure content.

14. The system according to claim 13, wherein said man-made structure content is selected from the group consisting of buildings, houses, roadways, and vehicles.

15. The system according to claim 11, wherein said 3D image is comprised of a plurality of points comprising a 3D point cloud where each point is defined in accordance with an X, Y, and Z coordinate axis value, and said 2D image is comprised of a plurality of pixels having a position defined exclusively in accordance with X and Y coordinate axis values.

16. The system according to claim 15, wherein said computer is programmed to assign each of said plurality of color maps to one or more points having X, Y, and Z coordinate values within areas in said 3D image based on said plurality of color map tags assigned to corresponding ones of said X, Y coordinate values of said plurality of areas identified in said 2D image.

17. The system according to claim 11, wherein said computer is programmed to remove a portion of said 3D image comprising data defining said surface prior to creating said fused image.

18. The system according to claim 15, wherein said Z values are assigned by: assigning to at least a first one of said plurality of pixels of said 2D image a first Z value of a corresponding one of said first points of said 3D image that has an X coordinate value and a Y coordinate value that is the same thereof; and interpolating or estimating a second Z value for at least a second one of said plurality of pixels of said 2D image if none of said first data points in said 3D image has the same X coordinate value and Y coordinate value as said one of said plurality of 2D pixels.

19. The system according to claim 11, wherein at least one color map of said plurality of color maps mimics colors or hues that are commonly associated with said content based characteristic of the area for which the color map is used.

20. The system according to claim 11, wherein said computer is programmed to register said 2D image and said 3D image.

21. A method performed by a computing device for combining a 2D image with a 3D image for improved visualization of a common scene, comprising:
analyzing said 2D image to identify selected content based characteristics of a plurality of first areas in said common scene;
selectively assigning to each area of said plurality of first areas a different color map tag of a plurality of color map tags that correspond to a content based characteristic of said area, each of said plurality of color map tags identifying a different one of a plurality of color maps;
registering a 2D image with a 3D image;
using said plurality of color map tags to assign a different color map of said plurality of color maps to each of a plurality of second areas of said 3D image which correspond to said plurality of first areas of said 2D image;
respectively applying said plurality of color maps to data defining said plurality of second areas of said 3D image;
forming a virtual 3D image from said 2D image by assigning a Z value to pixels in said 2D image based on a contour of a surface defined by first points of said 3D image; and
creating a fused image by overlaying said 3D image and said virtual 3D image in accordance with said registration.
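The Z-assignment step recited in claims 5 and 8 (copy a pixel's Z from the 3D point sharing its X, Y coordinates; otherwise interpolate or estimate one) can be sketched as below. This is a hedged illustration, not the patent's method: the function name and the nearest-neighbor fallback used as the estimator are assumptions.

```python
# Sketch of claim 8's two cases for assigning a Z value to 2D-image pixels.
import numpy as np

def assign_z(pixels_xy, points_xyz):
    """Return one Z value per 2D pixel: an exact (X, Y) match copies the
    matching 3D point's Z; otherwise Z is estimated from the surface
    contour (here, hypothetically, the nearest point's Z)."""
    lookup = {(x, y): z for x, y, z in points_xyz}
    pts = np.asarray(points_xyz, dtype=float)
    zs = []
    for x, y in pixels_xy:
        if (x, y) in lookup:                      # case 1: exact match
            zs.append(lookup[(x, y)])
        else:                                     # case 2: estimate Z
            d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
            zs.append(pts[np.argmin(d), 2])
    return np.asarray(zs)
```

With the pixels lifted to these Z values, the "virtual 3D image" of the claims can be overlaid on the original point cloud to form the fused image.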