Mapping images from one or more sources into an image for display
Country/Type: United States (US) Patent, Granted
IPC (7th edition): G06T-015/10; G09G-005/00
Application No.: UP-0379409 (filed 2003-03-01)
Registration No.: US-7619626 (granted 2009-11-27)
Inventor: Bernier, Kenneth L.
Applicant: The Boeing Company
Agent: Alston & Bird LLP
Citations: cited by 40 patents; cites 21 patents
Abstract
The present invention provides systems and methods that provide images of an environment to the viewpoint of a display. The systems and methods define a mapping surface, at a distance from the image source and display, that approximates the environment within the field of view of the image source. The systems and methods define a model that relates the different geometries of the image source, display, and mapping surface to each other. Using the model and the mapping surface, the systems and methods tile images from the image source, correlate the images to the display, and display the images. In instances where two image sources have overlapping fields of view on the mapping surface, the systems and methods overlap and stitch the images to form a mosaic image. If two overlapping image sources each have images with unique characteristics, the systems and methods fuse the images into a composite image.
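To make the geometry concrete, here is a minimal sketch (in Python with NumPy, not from the patent itself) of the two pieces the abstract describes: a spherical mapping surface built at a selected distance, and a projection of each surface vertex into a source image to obtain its texture coordinate. The pinhole-camera model and all function names are illustrative assumptions.

```python
# Illustrative sketch only: build a spherical mapping surface at a selected
# distance, then project each vertex into a source camera's image plane.
import numpy as np

def sphere_vertices(radius, az_range, el_range, n_az=32, n_el=16):
    """Vertex vectors on a spherical mapping surface (3D coordinates)."""
    az = np.linspace(*az_range, n_az)          # azimuth samples (radians)
    el = np.linspace(*el_range, n_el)          # elevation samples (radians)
    A, E = np.meshgrid(az, el)
    x = radius * np.cos(E) * np.sin(A)
    y = radius * np.sin(E)
    z = radius * np.cos(E) * np.cos(A)
    return np.stack([x, y, z], axis=-1)        # shape (n_el, n_az, 3)

def project_to_source(vertices, R, t, K):
    """Map mapping-surface vertices into a source image (pinhole model).

    R, t transform mapping-surface coordinates into the camera frame;
    K is a 3x3 camera intrinsic matrix. Returns (u, v) texture coordinates.
    """
    cam = vertices @ R.T + t                   # surface frame -> camera frame
    uvw = cam @ K.T                            # camera frame -> image plane
    return uvw[..., :2] / uvw[..., 2:3]        # perspective divide
```

With the vertex vectors and their texture coordinates in hand, each source image can be tiled onto the mapping surface and correlated to the display's viewpoint.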
Representative Claims
What is claimed is: 1. A system for providing images of an environment to a display, said system comprising: at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment; and a processor in communication with each image source and said display, wherein said processor: receives a selected distance; defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the mapping surface comprises a plurality of vertex vectors each representing a three-dimensional coordinate of a mapping space; for a selected vertex of the mapping surface within the field of view of said image sources, determines a texture vector of the image provided by said image sources that corresponds to the selected vertex of the mapping surface, and provides a collection of vectors comprising the selected vertex of the mapping surface, the texture vector of the image, and a color vector; defines a model that relates a geometry of said image sources, a geometry of said display, and a geometry of the mapping surface to each other, wherein said image sources, display and mapping surface all have different coordinate systems, and wherein said processor is configured to define the model so as to provide for transforming said image sources, said display, and the mapping surface to a different coordinate system; and maps different types of images provided by said image sources to said display using the model, wherein said image sources including image source A have respective fields of view that overlap each other on the mapping space such that said image sources provide respective images having texture vectors that correspond to the selected vertex of the mapping space, wherein respective images provided by each of said image sources have a unique characteristic, wherein said processor, for each image source, provides the selected vertex of the mapping surface and the texture vector of the respective image source such that the respective images from said image sources overlap on said display, and wherein said processor combines the respective images into a resultant image containing the unique characteristic of each respective image utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth. 2. A system according to claim 1, wherein at least one of said image sources and display has optical distortion associated therewith, and wherein the model defined by said processor accounts for the optical distortion. 3.
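Claim 1's fusion rule uses the source pixel's own intensity, normalized by the maximum representable value 2^N, as the blending coefficient. A minimal NumPy sketch of that formula (the function name and dtype handling are my own, not the patent's):

```python
import numpy as np

def content_based_fusion(display0, image_a, bit_depth):
    """Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0.

    The blending coefficient ImageA/2^N is the fraction of the maximum
    intensity (2^N for pixel bit depth N) reached by each source pixel.
    """
    alpha = image_a.astype(np.float64) / float(2 ** bit_depth)
    fused = alpha * image_a + (1.0 - alpha) * display0
    return fused.astype(display0.dtype)

# e.g. fusing an 8-bit infrared frame over a visible-band frame:
# fused = content_based_fusion(visible, infrared, bit_depth=8)
```

Bright pixels in image A dominate the fused result while dark pixels defer to the existing display value, which is what lets, say, hot spots from an infrared source punch through a visible-band image.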
A system according to claim 1, wherein for a selected coordinate of the mapping surface within the field of view of a respective image source, said processor using the model determines a coordinate of said respective image source that corresponds to the selected coordinate on the mapping surface, said processor further relates the selected mapping surface coordinate with a corresponding coordinate of the display and displays the image data associated with the determined coordinate of the respective image source at the corresponding coordinate on the display. 4. A system according to claim 1, wherein said processor comprises: a central processor; and a graphics processor for receiving the collection of vectors from said central processor and for rendering the collection of vectors as a 3D video-textured triangle. 5. A system according to claim 4, wherein at least two of said sources, said display, and said mapping space have a different coordinate system, and wherein said processor calculates transformations for transforming the vectors of said two of said sources, said display, and the mapping space to a primary coordinate system such that the vectors can be correlated. 6. A system according to claim 4, wherein the field of view of a respective image source defines an image projected on the mapping space, wherein said processor defines a tile that is smaller than the image projected on the mapping surface, wherein all texture vectors of the image within the tile are associated with a respective vertex of the mapping surface and all texture vectors projecting on the mapping surface at locations outside the tile are not associated with a vertex of the mapping space. 7. A system according to claim 4 wherein said at least two image sources have respective fields of view that overlap each other on the mapping space, such that said image sources provide respective images having texture vectors that correspond to the same vertex of the mapping space, wherein said processor, for each image source, provides the vertex of the mapping space and the texture vector of the respective image source to the graphics processor for display, such that the images from said image sources overlap on said display. 8. A system according to claim 7, wherein said processor defines a blending zone representing a plurality of vertices on the mapping surface in a location where the fields of view of said two image sources overlap, wherein for each vertex in the blending zone, said processor alters the intensity value of the vertex color to thereby substantially eliminate a seam between the overlapping tiles. 9. A system according to claim 7, wherein if the images from said at least two image sources have different brightness values, said processor reduces a color magnitude of the vertices of the brighter image located in the blend zone by a scale factor based on a ratio of the relative brightness of the adjacent images at the vertices. 10. A system according to claim 1, wherein said processor defines a mesh of three-dimensional points upon an inner surface of the mapping surface within the field of view of said image sources. 11. A system according to claim 10, wherein said processor maps the three-dimensional mesh points from the mapping surface to said image sources using the model, thereby defining a two-dimensional mesh in the image coordinates of said image sources. 12.
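Claims 8 and 9 modulate vertex colors inside the blend zone so overlapping tiles meet without a visible seam. A sketch of one plausible reading (linear ramp weights and a brightness-ratio scale; both helpers are assumptions, not the patent's stated method):

```python
import numpy as np

def blend_zone_ramps(n):
    """Complementary weights across n blend-zone vertices: one tile fades
    out as the adjacent tile fades in, hiding the seam (cf. claim 8)."""
    w = np.linspace(0.0, 1.0, n)
    return 1.0 - w, w

def equalize_brightness(colors_brighter, brightness_ratio):
    """Scale the brighter image's blend-zone vertex colors by the ratio of
    the adjacent images' relative brightness (cf. claim 9)."""
    return colors_brighter * brightness_ratio
```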
A system according to claim 11, wherein said processor uses three-dimensional textured-mesh rendering techniques to create said resultant image from the viewpoint of said display using the three-dimensional mesh points on the mapping surface as vertices, the two-dimensional mesh points of said image sources as texture coordinates, and pixels of the respective images as the texture. 13. A method for providing images of an environment to a display, said method comprising: providing at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment; receiving a selected distance; defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of the image sources, wherein at least two image sources including image source A have respective fields of view that overlap each other on the mapping surface; defining a model that relates a geometry of the image sources, a geometry of the display, and a geometry of the mapping surface to each other, wherein the image sources, the display and the mapping surface all have different coordinate systems, and wherein defining the model comprises transforming the image sources, the display, and the mapping surface to a different coordinate system; mapping different types of images provided by the image sources to the display using the model, wherein mapping comprises combining the respective images having fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and displaying the resultant image upon the display in accordance with the model. 14. A method according to claim 13, wherein at least one of the image sources and the display has optical distortion associated therewith, and wherein the model defined in said defining step accounts for the optical distortion. 15. A method according to claim 13, wherein for a selected coordinate of the mapping surface within the field of view of a respective image source, said method further comprising: using the model to determine a coordinate of the respective image source that corresponds to the selected coordinate on the mapping surface; relating the selected mapping surface coordinate with a corresponding coordinate of the display; and displaying the image data associated with the determined coordinate of the respective image source at the corresponding coordinate on the display. 16. A method according to claim 13, wherein said mapping comprises defining a mesh of three-dimensional points upon an inner surface of the mapping surface within the field of view of said image sources. 17.
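Claims 10 through 12 (and 16 through 18) describe standard textured-mesh rendering: the 3D mesh points on the sphere are the vertices and the projected 2D points are the texture coordinates. One way the grid's triangle connectivity could be generated is sketched below; the index layout is an illustrative assumption, not the patent's.

```python
import numpy as np

def mesh_triangles(n_el, n_az):
    """Triangle indices over an (n_el x n_az) grid of mapping-surface
    vertices, two triangles per grid cell, ready for textured rendering."""
    idx = np.arange(n_el * n_az).reshape(n_el, n_az)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()   # top corners
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()     # bottom corners
    return np.concatenate([np.stack([a, b, c], 1),
                           np.stack([b, d, c], 1)])
```

A renderer (OpenGL, for example) would draw these triangles with the sphere vertices as positions, the projected (u, v) points as texture coordinates, and the source image bound as the texture, warping it onto the mapping surface from the display's viewpoint.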
A method according to claim 16, wherein said mapping maps the three-dimensional mesh points from the mapping surface to said image sources using the model, thereby defining a two-dimensional mesh in the image coordinates of said image sources. 18. A method according to claim 17 further comprising using three-dimensional textured-mesh rendering techniques to create the resultant image from the viewpoint of the display using the three-dimensional mesh points on the mapping surface as vertices, the two-dimensional mesh points of said image sources as texture coordinates, and pixels of the respective images as the texture. 19. A system for providing images of an environment to a display, said system comprising: at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment; and a processor in communication with said image sources and the display, wherein said processor: receives a selected distance; defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the field of view of said image sources defines an image projected on the mapping surface at the selected distance, and wherein said processor defines a tile that encompasses only a subset of an area covered by the image projected on the mapping surface by said image sources such that other portions of the image projected on the mapping surface lie outside the tile, wherein said image sources including image source A have respective fields of view that overlap each other on the mapping surface, wherein said processor defines respective tiles for each image such that the tiles have overlapping regions, wherein said image sources provide respective images that each have at least one unique characteristic, and wherein said processor combines the respective images into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth. 20. A system according to claim 19, wherein said processor defines blend zones on the mapping surface within the overlap regions and modulates the intensity of the respective images in the blend zones to hide seams between the respective images. 21.
A method for providing images of an environment to a display, said method comprising: providing at least two image sources of different types including a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system, each image source having a field of view and providing an image of the environment, wherein said image sources include image source A and provide respective images that each have at least one unique characteristic; receiving a selected distance; defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the mapping surface approximates the environment within the field of view of said image sources, wherein the field of view of said image sources defines an image projected on the mapping surface at the selected distance, wherein said at least two image sources have respective fields of view that overlap each other on the mapping surface, wherein said defining step defines respective tiles for each respective image such that the tiles have overlapping regions; defining a tile that encompasses only a subset of an area covered by the image projected on the mapping surface by said at least two image sources such that other portions of the image projected on the mapping surface lie outside the tile; combining the respective images into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and displaying the respective image within the tile on the display. 22. A method according to claim 21 further comprising defining blend zones on the mapping surface within the overlap regions and modulating the intensity of the respective images in the blend zones to hide seams between the respective images. 23.
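The tile in claims 19 and 21 keeps only a sub-region of each projected image, so overlap between sources is confined to controlled regions. A sketch of the masking step (the bounds representation and names are illustrative):

```python
import numpy as np

def inside_tile(texcoords, tile_min, tile_max):
    """True where a texture vector falls inside the tile; vertices whose
    texture vectors land outside the tile get no texture assignment."""
    return np.all((texcoords >= tile_min) & (texcoords <= tile_max), axis=-1)
```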
A system for providing images of an environment to a display, said system comprising: at least two image sources having respective fields of view and providing different types of images of the environment having unique characteristics; and a processor in communication with said image sources and the display, wherein said processor receives a selected distance and defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources including image source A define respective images that project on to the mapping surface at the selected distance and have adjacent regions that overlap, and wherein said processor defines blend zones on the mapping surface within the overlap regions and modulates the intensity of the respective images in the blend zones to hide seams between the respective images, wherein said processor is configured to compare, for each of a plurality of pixels within a blend zone, an intensity of a pixel of one respective image to a predefined maximum intensity to determine an intensity percentage based thereupon, and to blend the corresponding pixels of the respective images based upon the intensity percentage, wherein said processor is configured to map different types of respective images provided by said image sources to the display by combining the respective images having respective fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth. 24. A system according to claim 23, wherein for regions of each respective image located in the blend zone, said processor tapers the intensity of the respective image from a point in the respective image at an edge of the blend zone to the outer edge of the respective image. 25.
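Claim 24 tapers each image's contribution from full intensity at the blend-zone edge down to nothing at the image's outer edge, so the stitched mosaic has no abrupt boundary. A one-line linear taper is the simplest reading (an assumption; the patent does not specify the taper profile):

```python
import numpy as np

def edge_taper(width):
    """Weights falling from 1.0 at the blend-zone edge to 0.0 at the
    image's outer edge, applied across a blend zone `width` pixels wide."""
    return np.linspace(1.0, 0.0, width)
```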
A method for providing images of an environment to a display, said method comprising: providing at least two image sources having respective fields of view and providing different types of images of the environment having unique characteristics; receiving a selected distance; defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources including image source A define respective images that project on to the mapping surface at the selected distance and have adjacent regions that overlap; defining blend zones on the mapping surface within the overlap regions; mapping different types of respective images provided by said image sources to the display by combining the respective images having respective fields of view that overlap each other into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; displaying the resultant image on the display; and modulating the intensity of the respective images in the blend zones to hide seams between the respective images, wherein modulating the intensity of the respective images comprises comparing, for each of a plurality of pixels within a blend zone, an intensity of a pixel of one respective image to a predefined maximum intensity, determining an intensity percentage based thereupon, and blending the corresponding pixels of the respective images based upon the intensity percentage. 26. A method according to claim 25, wherein for regions of each respective image located in the blend zone, said modulating step tapers the intensity of the image from a point in the respective image at an edge of the blend zone to the outer edge of the respective image. 27.
A system for providing images of an environment to a display, said system comprising: at least two image sources including image source A having respective fields of view that at least partially overlap, wherein said image sources are of different types and provide respective images that each have at least one unique characteristic, wherein said image sources include a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system; and a processor in communication with said image sources and the display, wherein said processor receives a selected distance and defines a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources define respective images that project on to the mapping surface at the selected distance and have regions that overlap, and wherein said processor combines the respective images from the different types of image sources into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth. 28. A system according to claim 27, wherein said processor displays one of the respective images with increased intensity relative to the other respective image to thereby enhance the resultant image. 29. A system according to claim 27, wherein said processor evaluates pixels of each respective image and weights pixels based on their associated intensity such that pixels having greater intensity are enhanced in the combined image. 30. A method for providing images of an environment to a display, said method comprising: providing at least two image sources including image source A having respective fields of view that at least partially overlap, wherein said image sources are of different types and provide respective images that each have at least one unique characteristic, wherein said image sources include a first image source comprising a camera for capturing a visual image and a second image source selected from a group consisting of an infrared source, a radar source and a synthetic vision system; receiving a selected distance; defining a mapping surface that is spherical in shape and has a radius equal to the selected distance, wherein the respective fields of view of said image sources define respective images that project on to the mapping surface at the selected distance and have regions that overlap; combining the respective images from the different types of image sources into a resultant image containing the unique characteristic of each respective image by utilizing content-based fusion using a blending coefficient determined from source pixel intensity by defining a pixel value of a display pixel Display1 as follows: Display1 = (ImageA/2^N)*ImageA + (1 - ImageA/2^N)*Display0, wherein Display0 is an initial display pixel value, ImageA is the respective image from said image source A, and N is a pixel bit depth; and displaying the resultant image on the display. 31.
A method according to claim 30, wherein said combining step displays one of the respective images with increased intensity relative to the other respective image to thereby enhance the resultant image. 32. A method according to claim 30 further comprising evaluating pixels of each respective image and weighting pixels based on their associated intensity such that pixels having greater intensity are enhanced in the combined image.
Patents Cited by This Patent (21)
Szeliski, Richard; Shum, Heung-Yeung, 3-dimensional image rotation method and apparatus for producing image mosaics.
Hale, Robert A. (Ellicott City, MD); Nathanson, Harvey C. (Pittsburgh, PA); Hazlett, Joel F. (Linthicum, MD), Distributed aperture imaging and tracking system.
Horiuchi, Kazu (Hirakata, JPX); Nishimura, Kenji (Sakai, JPX); Nakase, Yoshimori (Kawachinagano, JPX), Image displaying system for interactively changing the positions of a view vector and a viewpoint in a 3-dimensional space.
Lee, Kujin; Kweon, In So; Kim, Howon; Kim, Junsik, Method and apparatus for omni-directional image and 3-dimensional data acquisition with data annotation and dynamic range extension method.
Miller, Gavin S. P. (Mountain View, CA); Chen, Shenchang E. (Sunnyvale, CA), Textured sphere and spherical environment map rendering using texture map double indirection.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered AR eyepiece interface to external devices.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and sensor triggered control of AR eyepiece applications.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Cella, Charles; Nortrup, Robert J.; Nortrup, Edward H., AR glasses with event and user action control of external applications.
Osterhout, Ralph F.; Haddick, John D.; Lohse, Robert Michael; Border, John N.; Miller, Gregory D.; Stovall, Ross W., Eyepiece with uniformly illuminated reflective display.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Grating in a light transmissive illumination system for see-through near-eye display glasses.
Ellsworth, Christopher C.; Baity, Sean; Johnson, Brandon; Geer, Andrew, Method, apparatus, system and computer program product for automated collection and correlation for tactical information.
Miller, Gregory D.; Border, John N.; Osterhout, Ralph F., Optical imperfections in a light transmissive illumination system for see-through near-eye display glasses.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses including an auto-brightness control for the display brightness based on the brightness in the environment.
Border, John N.; Osterhout, Ralph F., See-through near-eye display glasses with a fast response photochromic film system for quick transition from dark to clear.
Border, John N.; Haddick, John D.; Osterhout, Ralph F., See-through near-eye display glasses with a light transmissive wedge shaped illumination system.
Border, John N.; Haddick, John D.; Lohse, Robert Michael; Osterhout, Ralph F., See-through near-eye display glasses with the optical assembly including absorptive polarizers or anti-reflective coatings to reduce stray light.
Samuthirapandian, Subash; Gurusamy, Saravanakumar; Raje, Anup; Wyatt, Ivan Sandy; Odgers, Rob; A, Fazurudheen, System and method for indicating a perspective cockpit field-of-view on a vertical situation display.