A method for merging first and second images includes determining a pixel difference image from the first and the second images, determining first and second locations of the foreground subject from the pixel difference image, determining a minimum path of values from the pixel difference image for a region between the first and the second locations of the foreground subject, forming a merged image by stitching the first and the second images along the minimum path, and adjusting pixels of the merged image within a width of the minimum path.
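The pipeline summarized above — difference image, x-axis projection to locate the subject in each image, a minimum-cost path between the two locations, and stitching along that path — can be sketched in Python. This is a minimal illustration under assumptions not stated in the patent (grayscale NumPy arrays, a simple run-based peak finder per claim 3, and a dynamic-programming seam); it is a sketch, not the patented implementation.

```python
import numpy as np

def locate_subject(proj):
    """Claim-2/3 style localization: find the two largest runs of the
    x-projection above its average and return their peak x-coordinates."""
    avg = proj.mean()
    runs, start = [], None
    for i, above in enumerate(proj > avg):
        if above and start is None:
            start = i
        elif not above and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(proj)))
    runs.sort(key=lambda r: r[1] - r[0], reverse=True)
    (a0, a1), (b0, b1) = sorted(runs[:2])
    return a0 + int(np.argmax(proj[a0:a1])), b0 + int(np.argmax(proj[b0:b1]))

def minimum_path(diff, x1, x2):
    """Dynamic-programming seam: per row, the column (between x1 and x2)
    on a top-to-bottom path of minimum cumulative pixel difference."""
    region = diff[:, x1:x2 + 1]
    h, w = region.shape
    cost = region.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            cost[y, x] += cost[y - 1, lo:hi].min()
    path = np.empty(h, dtype=int)          # backtrack from the bottom row
    path[-1] = int(np.argmin(cost[-1]))
    for y in range(h - 2, -1, -1):
        lo, hi = max(0, path[y + 1] - 1), min(w, path[y + 1] + 2)
        path[y] = lo + int(np.argmin(cost[y, lo:hi]))
    return path + x1                       # back to full-image x-coordinates

def merge_images(img1, img2):
    """Difference image -> subject locations -> seam -> stitched merge."""
    diff = np.abs(img1.astype(float) - img2.astype(float))
    x1, x2 = locate_subject(diff.sum(axis=0))  # project onto the x-axis
    seam = minimum_path(diff, min(x1, x2), max(x1, x2))
    merged = img1.astype(float).copy()
    for y, sx in enumerate(seam):  # img1 left of the seam, img2 right
        merged[y, sx:] = img2[y, sx:]
    return merged

# Same background (zeros), one subject at two different locations.
img1 = np.zeros((20, 20)); img1[5:16, 2:5] = 10.0
img2 = np.zeros((20, 20)); img2[5:16, 12:15] = 10.0
merged = merge_images(img1, img2)  # subject appears twice in the result
```

Because the seam is forced through the low-difference region between the two subject locations, the stitch passes only through pixels where the two images already agree, which is why the subject from each image survives intact.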
Representative Claims
1. A method to merge first and second images having a foreground subject at different locations of a same background to form a merged image, comprising:
determining, by a processor, a pixel difference image from the first and the second images, wherein each pixel of the pixel difference image comprises an absolute difference between corresponding pixels in the first and the second images;
from the pixel difference image, determining, by the processor, first and second locations of the foreground subject in the first and the second images, respectively;
for a region between the first and the second locations of the foreground subject, determining, by the processor, a minimum path of values extending from a top to a bottom of the merged image from the pixel difference image;
forming, by the processor, the merged image by stitching the first and the second images along the minimum path by using pixels from the first image on a first side of the minimum path and using pixels from the second image on a second side of the minimum path; and
displaying the merged image.

2. The method of claim 1, wherein said determining first and second locations of the foreground subject comprises:
projecting the pixel difference image onto an x-axis;
determining two peaks separated by a valley in the projection; and
setting x-coordinates of the two peaks as the first and the second locations of the foreground subject.

3. The method of claim 2, wherein said determining two peaks separated by a valley comprises:
determining an average of pixel differences on the projection;
determining two biggest continuous regions on the projection with pixel differences greater than the average;
determining a biggest continuous region on the projection with pixel differences less than the average; and
when the biggest continuous region is located between the two biggest continuous regions, setting x-coordinates of two largest pixel differences in the two biggest continuous regions as the first and the second locations of the foreground subject.

4. The method of claim 1, further comprising:
adjusting pixels of the merged image within a width of the minimum path.

5. The method of claim 4, wherein said adjusting comprises adjusting each pixel in proportion to its horizontal distance from the minimum path.

6. The method of claim 5, wherein said adjusting each pixel comprises determining pixel values as follows:
P_left(x, y) = P_L(x, y) + (w′ − w_left) × Diff(y), and
P_right(x, y) = P_R(x, y) + (w′ − w_right) × Diff(y),
where P_left(x, y) is a value of a left pixel of the merged image on the left of the minimum path, P_L(x, y) is an original value of the left pixel from one of the first and the second images, w′ is the width about the minimum path, w_left is the distance of the left pixel from the minimum path, P_right(x, y) is a value of a right pixel of the merged image on the right of the minimum path, P_R(x, y) is an original value of the right pixel from another of the first and the second images, w_right is the distance of the right pixel from the minimum path, and Diff(y) is a corresponding value from the minimum path for the left and the right pixels.

7. The method of claim 1, further comprising:
determining another pixel difference image from the first image and a third image;
from the another pixel difference image, determining a third location of the foreground subject in the third image;
determining yet another pixel difference image from the merged image and the third image;
for another region between the second and the third locations of the foreground subject, determining another minimum path of values from the yet another pixel difference image;
forming another merged image by stitching the merged image and the third image along the another minimum path by using pixels from the merged image on one side of the another minimum path and using pixels from the third image on another side of the another minimum path; and
adjusting pixels of the another merged image within a width of the another minimum path.

8. The method of claim 1, further comprising:
determining if the foreground subject in the second image and a third image does not overlap;
when the foreground subject in the second and the third images does not overlap:
determining another pixel difference image from the merged image and the third image;
for another region between the second location and a third location of the foreground subject in the third image, determining another minimum path of values from the another pixel difference image;
forming another merged image by stitching the merged image and the third image along the another minimum path by using pixels from the merged image on one side of the another minimum path and using pixels from the third image on another side of the another minimum path; and
adjusting pixels of the another merged image within a width of the another minimum path.

9. The method of claim 8, wherein said determining if the foreground subject in the second and the third images overlaps comprises:
determining a further pixel difference image from the second and the third images;
projecting the further pixel difference image onto an x-coordinate, where the foreground subject in the second and the third images does not overlap when there are two distinctive peaks in the projection.

10. A non-transitory computer-readable storage medium encoded with executable instructions for execution by a processor to merge first and second images having a foreground subject at different locations of a same background to form a merged image, the instructions comprising:
determining a pixel difference image from the first and the second images, wherein each pixel of the pixel difference image comprises an absolute difference between corresponding pixels in the first and the second images;
from the pixel difference image, determining first and second locations of the foreground subject in the first and the second images, respectively;
for a region between the first and the second locations of the foreground subject, determining a minimum path of values extending from a top to a bottom of the merged image from the pixel difference image;
forming the merged image by stitching the first and the second images along the minimum path by using pixels from the first image on a first side of the minimum path and using pixels from the second image on a second side of the minimum path; and
displaying the merged image.

11. The non-transitory computer-readable storage medium of claim 10, wherein said determining first and second locations of the foreground subject comprises:
projecting the pixel difference image onto an x-axis;
determining two peaks separated by a valley in the projection; and
setting x-coordinates of the two peaks as the first and the second locations of the foreground subject.

12. The non-transitory computer-readable storage medium of claim 11, wherein said determining two peaks separated by a valley comprises:
determining an average of pixel differences on the projection;
determining two biggest continuous regions on the projection with pixel differences greater than the average;
determining a biggest continuous region on the projection with pixel differences less than the average; and
when the biggest continuous region is located between the two biggest continuous regions, setting x-coordinates of two largest pixel differences in the two biggest continuous regions as the first and the second locations of the foreground subject.

13. The non-transitory computer-readable storage medium of claim 10, wherein the instructions further comprise:
adjusting pixels of the merged image within a width of the minimum path.

14. The non-transitory computer-readable storage medium of claim 13, wherein said adjusting comprises adjusting each pixel in proportion to its horizontal distance from the minimum path.

15. The non-transitory computer-readable storage medium of claim 14, wherein said adjusting each pixel comprises determining pixel values as follows:
P_left(x, y) = P_L(x, y) + (w′ − w_left) × Diff(y), and
P_right(x, y) = P_R(x, y) + (w′ − w_right) × Diff(y),
where P_left(x, y) is a value of a left pixel of the merged image on the left of the minimum path, P_L(x, y) is an original value of the left pixel from one of the first and the second images, w′ is the width about the minimum path, w_left is the distance of the left pixel from the minimum path, P_right(x, y) is a value of a right pixel of the merged image on the right of the minimum path, P_R(x, y) is an original value of the right pixel from another of the first and the second images, w_right is the distance of the right pixel from the minimum path, and Diff(y) is a corresponding value from the minimum path for the left and the right pixels.

16. The non-transitory computer-readable storage medium of claim 10, wherein the instructions further comprise:
determining another pixel difference image from the first image and a third image;
from the another pixel difference image, determining a third location of the foreground subject in the third image;
determining yet another pixel difference image from the merged image and the third image;
for another region between the second and the third locations of the foreground subject, determining another minimum path of values from the yet another pixel difference image;
forming another merged image by stitching the merged image and the third image along the another minimum path by using pixels from the merged image on one side of the another minimum path and using pixels from the third image on another side of the another minimum path; and
adjusting pixels of the another merged image within a width of the another minimum path.

17. The non-transitory computer-readable storage medium of claim 10, wherein the instructions further comprise:
determining if the foreground subject in the second image and a third image does not overlap;
when the foreground subject in the second and the third images does not overlap:
determining another pixel difference image from the merged image and the third image;
for another region between the second location and a third location of the foreground subject in the third image, determining another minimum path of values from the another pixel difference image;
forming another merged image by stitching the merged image and the third image along the another minimum path by using pixels from the merged image on one side of the another minimum path and using pixels from the third image on another side of the another minimum path; and
adjusting pixels of the another merged image within a width of the another minimum path.

18. The non-transitory computer-readable storage medium of claim 17, wherein said determining if the foreground subject in the second and the third images overlaps comprises:
determining a further pixel difference image from the second and the third images;
projecting the further pixel difference image onto an x-coordinate, where the foreground subject in the second and the third images does not overlap when there are two distinctive peaks in the projection.
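The per-pixel adjustment of claims 4-6 (mirrored in claims 13-15) can be sketched as below. The update rule is reproduced literally from claim 6; the seam array, the Diff(y) values, and the window clamping at the image borders are assumptions for illustration only, and a practical blend would typically also normalize the (w′ − distance) weight by w′.

```python
import numpy as np

def blend_along_seam(merged, seam, diff_along_seam, w_prime):
    """Sketch of the claimed adjustment: each pixel within w_prime of the
    seam is shifted by the seam difference Diff(y), weighted by how close
    the pixel is to the seam, to hide the stitch.  seam[y] is the seam
    column in row y; diff_along_seam[y] is Diff(y) from the minimum path."""
    out = merged.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        s, d = seam[y], diff_along_seam[y]
        for x in range(max(0, s - w_prime), min(w, s + w_prime + 1)):
            dist = abs(x - s)  # w_left or w_right in the claims
            # P(x, y) = P_orig(x, y) + (w' - dist) * Diff(y)
            out[y, x] += (w_prime - dist) * d
    return out

# Toy example: a zero image, a straight seam at column 2, Diff(y) = 1.
merged = np.zeros((3, 5))
out = blend_along_seam(merged, seam=[2, 2, 2],
                       diff_along_seam=[1.0, 1.0, 1.0], w_prime=2)
```

In the toy example the adjustment peaks at the seam column and falls off linearly to zero at distance w′, matching the "in proportion to its horizontal distance" language of claims 5 and 14.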