IPC Classification Information

Country/Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0692446 (2003-10-22)
Registration No. | US-7409105 (2008-08-05)
Inventors / Address |
- Jin, Yiqing
- Huang, Yushan
- Wu, Donghui
- Zhou, Lingxiang
Applicant / Address |
Agent / Address |
Citation Information | Times cited: 28 / Patents cited: 18
Abstract
A method for generating a panoramic image includes receiving a first image, dividing the first image into a first portion and a second portion, rotating the first portion of the first image, saving the rotated first portion of the first image in a nonvolatile memory, receiving a second image, dividing the second image into a third portion and a fourth portion, matching an overlapping region between the second portion of the first image and the third portion of the second image, stitching the second portion of the first image and the third portion of the second image to form a first stitched image, rotating the first stitched image, and saving the first stitched image in the nonvolatile memory.
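The memory-saving flow the abstract describes (rotate each finished strip so it can be appended row by row, carrying the overlap forward into the next stitch) can be sketched roughly as follows. The function name, the averaging "stitch", and the list standing in for nonvolatile memory are all illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def stitch_incrementally(images, overlap):
    """Illustrative sketch of the abstract's flow (not the patented code):
    split each image into a finished part and an overlap part, rotate the
    finished part 90 degrees so it can be appended row by row, and carry
    the overlap forward to be stitched with the next image."""
    saved = []        # stands in for the nonvolatile memory
    carry = None      # unfinished right-hand portion of the previous image
    for img in images:
        if carry is None:
            finished = img[:, :-overlap]
        else:
            # naive "stitch": average the carried portion with the
            # matching columns of the new image
            merged = (carry + img[:, :overlap]) // 2
            finished = np.concatenate([merged, img[:, overlap:-overlap]], axis=1)
        saved.append(np.rot90(finished, -1))   # rotate, then "save"
        carry = img[:, -overlap:]
    saved.append(np.rot90(carry, -1))          # flush the final portion
    panorama = np.concatenate(saved, axis=0)
    return np.rot90(panorama)                  # rotate back to the original orientation
```

With this scheme only the current image plus the carried overlap needs to be held in working memory at a time; the final `rot90` mirrors the claims' step of rotating the panorama back to the original orientation before saving.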
Representative Claims
What is claimed is:

1. A method for generating a panoramic image, comprising:
receiving a first image;
dividing the first image into a first portion and a second portion;
orthogonally rotating the first portion of the first image;
saving the rotated first portion of the first image as part of the panoramic image in a nonvolatile memory;
receiving a second image;
dividing the second image into a third portion and a fourth portion;
matching an overlapping region between the second portion of the first image and the third portion of the second image;
stitching the second portion of the first image and the third portion of the second image to form a first stitched image;
orthogonally rotating the first stitched image;
saving the first stitched image as part of the panoramic image in the nonvolatile memory; and
orthogonally rotating the panoramic image back to an original orientation of the first and the second images and saving the panoramic image in the nonvolatile memory.

2. The method of claim 1, further comprising: after said receiving a first image and prior to said dividing the first image, projecting the first image onto a cylinder to warp the first image; and after said receiving a second image and prior to said dividing the second image, projecting the second image onto the cylinder to warp the second image.

3.
A method for generating a panoramic image, comprising:
receiving a first image;
projecting the first image onto a cylinder to warp the first image;
dividing the first image into a first portion and a second portion;
rotating the first portion of the first image;
saving the rotated first portion of the first image in a nonvolatile memory;
receiving a second image;
projecting the second image onto the cylinder to warp the second image;
dividing the second image into a third portion and a fourth portion;
matching an overlapping region between the second portion of the first image and the third portion of the second image;
stitching the second portion of the first image and the third portion of the second image to form a first stitched image;
rotating the first stitched image; and
saving the first stitched image in the nonvolatile memory;
wherein said projecting the first image onto a cylinder and said projecting the second image onto the cylinder comprises calculating coordinates of points on the cylinder as follows:
wherein x' and y' are the coordinates of each point on the cylinder, x and y are the coordinates of each point on the first image and the second image, and f is the focal length of the camera.

4. The method of claim 1, wherein said matching the second portion of the first image and the third portion of the second image comprises matching shared features between the second portion of the first image and a sub-portion of the third portion of the second image.

5.
The method of claim 4, wherein said matching shared features between the second portion of the first image and a sub-portion of the third portion of the second image comprises:
generating a first level of the second portion of the first image at a first resolution;
generating a second level of the third portion of the second image at the first resolution;
selecting at least a first feature on the first level of the first image;
searching the second level of the second image for the first feature; and
matching the first feature between the first level of the first image and the second level of the second image to determine a first relative motion between the first image and the second image.

6. The method of claim 5, wherein said matching shared features between the second portion of the first image and a sub-portion of the third portion of the second image further comprises: matching pixels in the second portion of the first image and the third portion of the second image based on the first relative motion between the first image and the second image.

7.
The method of claim 5, wherein said matching shared features between the second portion of the first image and a sub-portion of the third portion of the second image further comprises:
generating a third level of the second portion of the first image at a second resolution that is greater than the first resolution;
generating a fourth level of the third portion of the second image at the second resolution;
selecting at least a second feature on the third level of the first image;
searching an area on the fourth level of the second image for the second feature, wherein the area is selected based on the first relative motion between the first image and the second image;
matching the second feature between the third level of the first image and the fourth level of the second image to determine a second relative motion between the first image and the second image; and
matching pixels in the second portion of the first image and the third portion of the second image based on the second relative motion between the first image and the second image.

8.
A method for generating a panoramic image, comprising:
receiving a first image;
dividing the first image into a first portion and a second portion;
rotating the first portion of the first image;
saving the rotated first portion of the first image in a nonvolatile memory;
receiving a second image;
dividing the second image into a third portion and a fourth portion;
matching an overlapping region between the second portion of the first image and the third portion of the second image;
stitching the second portion of the first image and the third portion of the second image to form a first stitched image, comprising:
determining a minimum color difference path in the overlapping region, comprising:
determining a color difference map of the overlapping region; and
determining a path that has a lowest sum of color differences of pixels from the color difference map;
filling a first side of the minimum color difference path with color values from the first image; and
filling a second side of the minimum color difference path with color values from the second image;
rotating the first stitched image; and
saving the first stitched image in the nonvolatile memory.

9. The method of claim 8, further comprising blending the overlapping region if a color difference between the first side and the second side of a scan line is less than a threshold, comprising: blending the color values of the first image and the second image along a blending width of the minimum color difference path.

10. The method of claim 9, wherein said blending the color values of the first image and the second image comprises: adjusting the color values of the first image and the second image along the blending width using a value C(x) defined by:
where C(x) is the color value to be added to or subtracted from a pixel located x away from pixel (i,j) on the minimum color difference path, dij is the color difference of pixel (i,j), and W is the blending width.

11.
The method of claim 10, wherein the value C(x) is (1) added to the color values of the first image and subtracted from the second image or (2) subtracted from the color values of the first image and added to the second image.

12. The method of claim 10, wherein the width is the largest integer 2^n that is less than the width of the second portion of the first image, and division operations in calculating the parameter C(x) comprise shift operations.

13. The method of claim 1, further comprising:
receiving a third image;
dividing the third image into a fifth portion and a sixth portion;
matching the fourth portion of the second image and the fifth portion of the third image;
stitching the fourth portion of the second image and the fifth portion of the third image to form a second stitched image;
rotating the second stitched image; and
saving the second stitched image as part of the panoramic image in the nonvolatile memory.

14. The method of claim 1, wherein said saving the rotated first portion of the first image, said saving the first stitched image, and said saving the panoramic image comprise saving in a JPEG format in the nonvolatile memory.

15. The method of claim 8, wherein said determining a path that has a lowest sum of color differences of pixels from the color difference map comprises using a weighted color difference for each pixel in the color difference map, the weighted color difference for each pixel being defined as a sum of pixel values of the pixel and its five lower neighbors from the color difference map.
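Claim 3 refers to a warp formula that this record does not reproduce. A minimal sketch using the standard forward cylindrical projection, which is consistent with the variable definitions the claim does give (x', y' on the cylinder; x, y on the image; f the focal length), might look like the following; treat the formula as an assumed reconstruction, not the claim's verbatim equation:

```python
import math

def cylinder_coords(x, y, f):
    """Assumed standard cylindrical projection (the record omits the
    patent's own equation): map image point (x, y), measured from the
    image center, onto the cylinder using focal length f."""
    x_cyl = f * math.atan(x / f)               # horizontal angle scaled by f
    y_cyl = f * y / math.sqrt(x * x + f * f)   # vertical foreshortening
    return x_cyl, y_cyl
```

Points at the image center map to themselves, and points far from the center are pulled inward, which is what makes adjacent warped frames line up before stitching.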
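The coarse-to-fine matching of claims 5-7 can be illustrated with a toy two-level search for a horizontal shift between two same-sized overlap strips. The names and the mean-absolute-difference score are hypothetical stand-ins for the claims' feature matching:

```python
import numpy as np

def best_shift(a, b, search, center=0):
    """Score horizontal shifts of b against a within center +/- search,
    returning the shift with the lowest mean absolute difference.
    Both arrays are assumed to have the same shape."""
    best, best_err = center, float("inf")
    h, w = a.shape
    for s in range(center - search, center + search + 1):
        if s <= -w or s >= w:
            continue
        if s >= 0:
            err = np.abs(a[:, s:] - b[:, :w - s]).mean()
        else:
            err = np.abs(a[:, :w + s] - b[:, -s:]).mean()
        if err < best_err:
            best, best_err = s, err
    return best

def coarse_to_fine_shift(a, b, search):
    """Claims 5-7 style two-level search (hypothetical sketch):
    estimate the relative motion at half resolution, then refine it
    in a small window at full resolution."""
    a2, b2 = a[::2, ::2].astype(float), b[::2, ::2].astype(float)
    coarse = best_shift(a2, b2, search // 2) * 2
    return best_shift(a.astype(float), b.astype(float), 2, center=coarse)
```

The coarse pass keeps the expensive full-resolution search confined to a small window around the half-resolution estimate, which is the point of the pyramid levels in claims 5 and 7.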
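Claim 8's "path that has a lowest sum of color differences" is a classic dynamic-programming seam search. A small sketch over a precomputed difference map (hypothetical helper; 8-connected downward moves assumed) could be:

```python
import numpy as np

def min_difference_seam(diff):
    """Dynamic-programming seam through a color-difference map:
    returns one column index per row forming the cheapest
    top-to-bottom path, moving at most one column per row."""
    h, w = diff.shape
    cost = diff.astype(float).copy()
    for r in range(1, h):
        for c in range(w):
            lo, hi = max(0, c - 1), min(w, c + 2)
            cost[r, c] += cost[r - 1, lo:hi].min()   # cheapest way to reach (r, c)
    seam = [int(np.argmin(cost[-1]))]                # best endpoint on the last row
    for r in range(h - 2, -1, -1):                   # backtrack upward
        c = seam[-1]
        lo, hi = max(0, c - 1), min(w, c + 2)
        seam.append(lo + int(np.argmin(cost[r, lo:hi])))
    seam.reverse()
    return seam
```

Pixels left of the seam would then be filled from the first image and pixels right of it from the second, as in claim 8; claim 15's weighted variant would replace each map entry with the sum of the pixel and its five lower neighbors before running the same search.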