Methods, and apparatus implementing methods, including computer program products, for merging images of segments of a view. Methods include: receiving, from a network, a first image representing a first segment of the view and a second image representing a second segment of the view; determining the position of the second segment of the view relative to the first segment of the view; blending the first image with the second image based on the determined position of the second segment relative to the first segment to form a panoramic image of the view; and transmitting the panoramic image over the network.
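The abstract describes determining the relative position of two image segments purely from image content and then blending them into a panorama. As a minimal sketch of that idea (not the patented implementation), the offset can be estimated by scoring candidate shifts with a correlation measure over the overlap, and the blend can be a simple average in the overlapping region. All function names and the brute-force search strategy here are illustrative assumptions.

```python
import numpy as np

def relative_offset(img1, img2, max_shift=8):
    """Estimate the (dy, dx) offset of img2 relative to img1 by brute-force
    search over candidate shifts, scoring each by correlation over the
    overlapping region. Illustrative only; practical systems use phase
    correlation or feature matching instead of exhaustive search."""
    best, best_score = (0, 0), -np.inf
    h, w = img1.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # overlapping window of img1 against img2 shifted by (dy, dx)
            y1, y2 = max(0, dy), min(h, h + dy)
            x1, x2 = max(0, dx), min(w, w + dx)
            if y2 - y1 < 4 or x2 - x1 < 4:
                continue  # too little overlap to score reliably
            a = img1[y1:y2, x1:x2]
            b = img2[y1 - dy:y2 - dy, x1 - dx:x2 - dx]
            score = np.corrcoef(a.ravel(), b.ravel())[0, 1]
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

def blend(img1, img2, dy, dx):
    """Paste both images onto a shared canvas at the estimated offset,
    averaging where they overlap (a crude stand-in for real blending)."""
    h, w = img1.shape
    canvas = np.zeros((h + abs(dy), w + abs(dx)))
    weight = np.zeros_like(canvas)
    oy1, ox1 = max(0, -dy), max(0, -dx)   # img1 position on the canvas
    oy2, ox2 = max(0, dy), max(0, dx)     # img2 position on the canvas
    canvas[oy1:oy1 + h, ox1:ox1 + w] += img1
    weight[oy1:oy1 + h, ox1:ox1 + w] += 1
    canvas[oy2:oy2 + h, ox2:ox2 + w] += img2
    weight[oy2:oy2 + h, ox2:ox2 + w] += 1
    return canvas / np.maximum(weight, 1)

# Two 16x16 views of a shared random scene, the second offset by (2, 3).
rng = np.random.default_rng(0)
scene = rng.random((24, 24))
img1, img2 = scene[:16, :16], scene[2:18, 3:19]
dy, dx = relative_offset(img1, img2)   # recovers (2, 3) from content alone
pano = blend(img1, img2, dy, dx)
```

Note that no positioning metadata is supplied: the offset is recovered "based solely on the content of the images," which is the property the claims below emphasize.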
Representative Claims
What is claimed is:

1. A method of merging images of segments of a view, comprising: receiving a first image representing a first segment of the view and a second image representing a second segment of the view, the images being received from a remote location over a network; determining the position of the second segment of the view relative to the first segment of the view without the aid of positioning information provided by a human operator; blending the first image with the second image based solely on the content of the images and the determined position of the second segment relative to the first segment to merge the first image and the second image into a panoramic image of the view; and transmitting the panoramic image over the network.

2. The method of claim 1, further comprising: determining whether the second image overlaps the first image based on the position of the second segment relative to the first segment, wherein blending the first image and the second image is only performed when the second image overlaps the first image.

3. The method of claim 1, further comprising: correcting perspective distortion in the second image relative to the first image prior to blending the first image with the second image.

4. The method of claim 1, further comprising, prior to blending the set of images: determining which of the images is a central one and which are peripheral images; and using the central image as an initial reference image in correcting perspective distortion in the peripheral images.

5. The method of claim 4, further comprising: determining what pair-wise overlap areas exist between the central image and each of the peripheral images; and selecting as the first peripheral image to have perspective distortion corrected a peripheral image having a maximum pair-wise overlap area with the central image relative to the other peripheral images.

6. The method of claim 1, further comprising: receiving the images from a remote location over a network; and transmitting the panoramic image over the network.

7. A method of merging a set of images, each image representing a corresponding segment of a view, the set including a first image representing a first segment of the view, a second image representing a second segment of the view, and a third image representing a third segment of the view, where the third segment of the view overlaps both the first segment and the second segment of the view, the method comprising: determining a first relative position of the third segment relative to the first segment of the view by processing the content of the third image and the first image; determining a first overlap area of the first image and the third image based on the determined first relative position; determining a second relative position of the third segment relative to the second segment of the view by processing the content of the third image and the second image; determining a second overlap area of the second image and the third image based on the determined second relative position; and if the first overlap area is greater than the second overlap area, offsetting the position of the third image relative to the first image and the second image based on the determined first relative position; otherwise, offsetting the position of the third image relative to the first image and the second image based on the determined second relative position.

8. The method of claim 7, further comprising: correcting perspective distortion in at least one of the set of images prior to blending the set of images.

9. The method of claim 7, further comprising: determining which of the images is a central one and which are peripheral images; and using the central image as an initial reference image in correcting perspective distortion in the peripheral images.

10. The method of claim 9, further comprising: determining what pair-wise overlap areas exist between the central image and each of the peripheral images; and selecting as the first peripheral image to have perspective distortion corrected a peripheral image having a maximum pair-wise overlap area with the central image relative to the other peripheral images.

11. The method of claim 10, further comprising, prior to blending the set of images: determining a first overlap area between a second one of the peripheral images and the central one of the images; determining a second overlap area between the second one of the peripheral images and the first peripheral one of the images; and if the first overlap area is greater than the second overlap area, correcting perspective distortion in the second one of the peripheral images relative to the central one of the images.

12. The method of claim 11, further comprising, prior to blending the set of images: if the first overlap area is less than the second overlap area, correcting perspective distortion in the second one of the peripheral images relative to the first peripheral one of the images.

13. The method of claim 7, further comprising blending the third image with the first and second images, wherein the blending includes: dividing the third image into a first portion and a second portion, based on the first relative position; and compositing the first portion of the third image on the first image at the first relative position to produce a composite image, the compositing causing the first portion to mask out a part of the first image.

14. The method of claim 13, wherein blending the third image with the first and second images further includes: dividing the second image into a third portion and a fourth portion, based on a relative position of the second segment of the view relative to the first segment of the view; dividing the third portion into a fifth portion and a sixth portion, based on the second relative position; and compositing the fifth portion of the second image on the composite image based on the second relative position to form the panoramic image, the compositing of the fifth portion causing the fifth portion to mask out a part of the composite image.

15. An article comprising a computer-readable medium on which are tangibly stored computer-executable instructions for merging images of segments of a view, the stored instructions being operable to cause a computer to: receive a first image representing a first segment of the view and a second image representing a second segment of the view, the images being received from a remote location over a network; determine the position of the second segment of the view relative to the first segment of the view without the aid of positioning information provided by a human operator; blend the first image with the second image based solely on the content of the images and the determined position of the second segment relative to the first segment to merge the first image and the second image into a panoramic image of the view; and transmit the panoramic image over the network.

16. The article of claim 15, wherein the instructions that determine the position and blend the first and second images operate without positioning information from a human operator.

17. The article of claim 15, wherein the stored instructions further comprise instructions operable to cause the computer to: determine whether the second image overlaps the first image based on the position of the second segment relative to the first segment, wherein blending the first image and the second image is only performed when the second image overlaps the first image.

18. The article of claim 15, wherein the stored instructions further comprise instructions operable to cause the computer to: correct perspective distortion in the second image relative to the first image prior to blending the first image with the second image.

19. The article of claim 15, wherein the stored instructions further comprise instructions operable to cause the computer to: receive the images from a remote location over a network; and transmit the panoramic image over the network.

20. An article comprising a computer-readable medium which stores computer-executable instructions for merging a set of images, each image representing a corresponding segment of a view, the set including a first image representing a first segment of the view, a second image representing a second segment of the view, and a third image representing a third segment of the view, where the third segment of the view overlaps both the first segment and the second segment of the view, the instructions being operable to cause a computer to: determine a first relative position of the third segment relative to the first segment of the view by processing the content of the third image and the first image; determine a first overlap area of the first image and the third image based on the determined first relative position; determine a second relative position of the third segment relative to the second segment of the view by processing the content of the third image and the second image; determine a second overlap area of the second image and the third image based on the determined second relative position; and if the first overlap area is greater than the second overlap area, offset the position of the third image relative to the first image and the second image based on the determined first relative position; otherwise, offset the third image relative to the first image and the second image based on the determined second relative position.

21. The article of claim 20, wherein the stored instructions further comprise instructions operable to cause the computer to: blend the set of images; and correct perspective distortion in at least one of the set of images prior to blending the set of images.

22. The article of claim 21, wherein the stored instructions further comprise instructions operable to cause the computer to: determine which of the images is a central one and which are peripheral images; and use the central image as an initial reference image in correcting perspective distortion in the peripheral images.

23. The article of claim 22, wherein the stored instructions further comprise instructions operable to cause the computer to: determine what pair-wise overlap areas exist between the central image and each of the peripheral images; and select as the first peripheral image to be corrected for perspective distortion a peripheral image having a maximum pair-wise overlap area with the central image relative to the other peripheral images.

24. The article of claim 23, wherein the stored instructions further comprise instructions operable to cause the computer to, prior to blending the set of images: determine a first overlap area between a second one of the peripheral images and the central one of the images; determine a second overlap area between the second one of the peripheral images and the first peripheral one of the images; and if the first overlap area is greater than the second overlap area, correct perspective distortion in the second one of the peripheral images relative to the central one of the images.

25. The article of claim 24, wherein the stored instructions further comprise instructions operable to cause the computer to, prior to blending the set of images: if the first overlap area is less than the second overlap area, correct perspective distortion in the second one of the peripheral images relative to the first peripheral one of the images.

26. The article of claim 20, wherein the stored instructions further comprise instructions operable to cause the computer to blend the third image with the first and second images, wherein the blending includes: dividing the third image into a first portion and a second portion, based on the first relative position; and compositing the first portion of the third image on the first image at the first relative position to produce a composite image, the compositing causing the first portion to mask out a part of the first image.

27. The article of claim 26, wherein blending the third image with the first and second images further includes: dividing the second image into a third portion and a fourth portion, based on a relative position of the second segment of the view relative to the first segment of the view; dividing the third portion into a fifth portion and a sixth portion, based on the second relative position; and compositing the fifth portion of the second image on the composite image based on the second relative position to form the panoramic image, the compositing of the fifth portion causing the fifth portion to mask out a part of the composite image.
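Claims 7 and 20 select a reference for the third image by comparing its pairwise overlap areas with the first two images and anchoring it to whichever neighbor it overlaps more. A minimal sketch of that decision rule, assuming equally sized images and pure translational offsets (all names here are illustrative, not from the patent):

```python
def overlap_area(size, offset):
    """Rectangular overlap (in pixels) between two equally sized images
    when the second is shifted by (dy, dx) relative to the first."""
    h, w = size
    dy, dx = offset
    return max(0, h - abs(dy)) * max(0, w - abs(dx))

def place_third(size, off_3_vs_1, off_3_vs_2, pos_1, pos_2):
    """Claim-7-style placement: if the overlap with image 1 is greater,
    position image 3 via its offset from image 1; otherwise via its
    offset from image 2. pos_1/pos_2 are the already-fixed canvas
    positions of the first two images."""
    a1 = overlap_area(size, off_3_vs_1)
    a2 = overlap_area(size, off_3_vs_2)
    if a1 > a2:
        return (pos_1[0] + off_3_vs_1[0], pos_1[1] + off_3_vs_1[1])
    return (pos_2[0] + off_3_vs_2[0], pos_2[1] + off_3_vs_2[1])

# 100x100 images; image 3 overlaps image 1 far more than image 2:
# a1 = 100 * 60 = 6000, a2 = 40 * 20 = 800, so image 3 anchors to image 1.
pos3 = place_third((100, 100), (0, 40), (60, 80), (0, 0), (0, 90))
```

Preferring the larger overlap gives the position estimate more shared content to rest on, which is presumably why the claims order both placement and perspective correction (claims 5, 10, 11, 23, 24) by maximum pairwise overlap.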
Patents cited by this patent (39)
Szeliski, Richard; Shum, Heung-Yeung, Block adjustment method and apparatus for construction of image mosaics.
Katayama, Tatsushi (JP); Takiguchi, Hideo (JP); Yano, Kotaro (JP); Hatori, Kenji (JP), Image combining apparatus using a combining algorithm selected based on an image sensing condition corresponding to each stored image.
Szeliski, Richard; Shum, Heung-Yeung, Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping.
Driscoll, Edward, Jr.; Morrow, Howard; Steinhauer, Alan J.; Lomax, Willard Curtis, Method and apparatus for a panoramic camera to capture a 360 degree image.
Cheng, Keh-shin Fu; Kumar, Keeranoor G.; Lipscomb, James Sargent; Menon, Jai Prakash; Willebeek-LeMair, Marc Hubert, Method and apparatus for displaying panoramas with streaming video.
Driscoll, Edward, Jr.; Morrow, Howard; Steinhauer, Alan J.; Lomax, Willard Curtis, Method and apparatus for electronically distributing images from a panoptic camera system.
Herman, Joshua Randy (deceased); Bergen, James Russell; Peleg, Shmuel (IL); Paragano, Vincent; Dixon, Douglas F.; Burt, Peter J.; Sawhney, Harpreet; Gendel, Gary A.; Kumar, Rakesh; Brill, Michael H., Method and apparatus for mosaic image construction.
Hsu, Stephen Charles; Kumar, Rakesh; Sawhney, Harpreet Singh; Bergen, James R.; Dixon, Doug; Paragano, Vince; Gendel, Gary, Method and apparatus for performing local to global multiframe alignment to construct mosaic images.
Kumar, Rakesh; Hanna, Keith James; Bergen, James R.; Anandan, Padmanabhan; Irani, Michal, Method and system for image combination using a parallax-based technique.
Bender, Walter R.; Teodosio, Laura A., Method of creating a high resolution still image using a plurality of images and apparatus for practice of the method.
Lukacs, Michael Edward (25 Wanamassa Point Rd., Ocean Township, NJ 07712), Real time video conferencing system and method with multilayer keying of multiple video images.
Jones, Boland T.; Frohwein, Robert J.; Guthrie, David Michael; Stewart, Peter, Location aware conferencing with graphical representations that enable licensing and advertising.
Zhou, Hui; Wong, Alexander Sheung Lai, Method and apparatus for automatically estimating the layout of a sequentially ordered series of frames to be used to form a panorama.
Chen, Simon; Chien, Jen-Chan; Jin, Hailin, Method and apparatus for matching image metadata to a profile database to determine image processing parameters.
Kimchi, Gur; Dekate, Amit; Kuppusamy, Ashok; Lombardi, Steve; Schwartz, Joseph; Lawler, Stephen L.; Gounares, Alexander G.; Endres, Raymond E., Obtaining and displaying virtual earth images.
Kimchi, Gur; Dekate, Amit; Kuppusamy, Ashok; Lombardi, Steve; Schwartz, Joseph; Lawler, Stephen L.; Gounares, Alexander G.; Endres, Raymond E., Obtaining and displaying virtual earth images.
Kimchi, Gur; Dekate, Amit; Kuppusamy, Ashok; Lombardi, Steve; Schwartz, Joseph; Lawler, Stephen L.; Gounares, Alexander G.; Endres, Raymond E., Obtaining and displaying virtual earth images.
Jones, Boland T.; Guthrie, David Michael; Santoro, Nicole C.; Mijatovic, Vladmir; Leigh, Randolph J.; Frohwein, Robert J.; Schaefer, Laurence; Martin, J. Douglas, Record and playback in a conference.
Aguera y Arcas, Blaise; Unger, Markus; Barnett, Donald A.; Sinha, Sudipta Narayan; Stollnitz, Eric Joel; Kopf, Johannes Peter; Pylvaenaeinen, Timo Pekka; Messer, Christopher Stephen, Translated view navigation for visualizations.