Image matching using resolution pyramids with geometric constraints
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G06T-003/00; G06T-011/60
Application Number: US-0665917 (2000-09-20)
Inventor / Address: Zhou, Lingxiang
Applicant / Address: ArcSoft, Inc.
Agent / Address: Patent Law Group LLP
Citation Information: cited by 30 patents; cites 10 patents
Abstract
An image matching method for matching a first image and an overlapping second image includes generating a first set of working layers of the first image and a second set of working layers of the second image. The method determines an overlap between an initial working layer of the first image and an initial working layer of the second image where the initial working layers have a smaller pixel array size and a lower image resolution than the other working layers. The method selects a feature point in the working layer of the first image and determines a position in the working layer of the second image corresponding to the feature point. The method then determines the motion parameters based on the feature point and the position in the first and second images. Finally, the method repeats the selection of a feature point using another working layer of the first image and another working layer of the second image, each of these working layers has a larger pixel array size and a higher image resolution than the initial working layers.
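The "working layers" the abstract describes form a resolution pyramid. A minimal sketch of how such a set of layers could be generated, assuming 2×2 block averaging for downsampling and a coarsest-layer size in the 32–64 pixel range mentioned in the claims (the function name, the averaging scheme, and the stopping rule are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def build_working_layers(image, min_size=32):
    """Build a set of working layers by repeated 2x2 averaging,
    stopping before a layer would fall below min_size pixels.
    Returns layers ordered from coarsest (initial working layer)
    to finest (the original image)."""
    layers = [image.astype(np.float64)]
    while min(layers[-1].shape) // 2 >= min_size:
        prev = layers[-1]
        # Trim to even dimensions, then average each 2x2 block.
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        halved = prev[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        layers.append(halved)
    return layers[::-1]  # coarsest first

layers = build_working_layers(np.zeros((256, 256)))
# For a 256x256 input this yields layers of size 32, 64, 128, and 256.
```

Matching then starts at `layers[0]` (the initial working layer) and the motion parameters estimated there seed the search at each successively finer layer.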
Representative Claims
1. A method for matching a first image to a second image, said first image overlapping said second image, comprising:(a) generating a first set of working layers of said first image, said first set of working layers comprising a plurality of working layers having successively increasing resolutions from a first working layer having the coarsest resolution to a last working layer having the finest resolution;(b) generating a second set of working layers of said second image, said second set of working layers comprising a plurality of working layers having successively increasing resolutions from a first working layer having the coarsest resolution to a last working layer having the finest resolution;(c) determining using correlation an overlap region between said first working layer of said first image and said first working layer of said second image, said overlap region being defined by translational displacement parameters only;(d) selecting said first working layer of said first image;(e) selecting a plurality of feature points in said first working layer of said first image;(f) determining from within a plurality of attention areas and using the translational displacement parameters a plurality of positions in said first working layer of the second image corresponding to said plurality of feature points in said first image;(g) determining motion parameters based on said plurality of feature points and said plurality of positions, said motion parameters describing the positional relationship between said first working layer of said first image and said first working layer of said second image;(h) selecting a next working layer in said first set of working layers of said first image, the next working layer being a working layer with the next higher resolution than said first working layer;(i) selecting a second next working layer in said second set of working layers of said second image, the second next working layer having the same resolution as said next 
working layer of said first image;(j) selecting a plurality of feature points in said next working layer of said first image;(k) determining from within a plurality of attention areas and using the motion parameters a plurality of positions in said second next working layer of the second image corresponding to said plurality of feature points in said first image;(l) determining motion parameters based on said plurality of feature points and said plurality of positions, said motion parameters describing the positional relationship between said next working layer of said first image and said second next working layer of said second image;(m) selecting another next working layer in said first set of working layers of said first image, the next working layer being a working layer with the next higher resolution than the working layer just processed; and(n) repeating (i) to (m) using said next working layer of said first image and said second next working layer of said second image until said last working layer of said first image and said last working layer of said second image have been processed. 2. The method of claim 1, wherein said selecting a plurality of feature points in said first working layer and said selecting a plurality of feature points in said next working layer each comprises:selecting a plurality of image points in the respective working layer of said first image wherein a pixel brightness and an edge orientation of each of said image points change rapidly. 3. The method of claim 1, wherein said selecting a plurality of feature points in said first working layer and said selecting a plurality of feature points in said next working layer each comprises:selecting a plurality of image corners in the respective working layer of said first image. 4.
The method of claim 1, wherein said determining a plurality of positions in said first working layer of said second image and said determining a plurality of positions in said second next working layer of said second image each comprises:calculating a plurality of predicted positions in the respective working layer corresponding to each of said plurality of feature points;selecting a plurality of search areas as said plurality of attention areas, each search area being associated with and surrounding one of said plurality of predicted positions; andsearching within each of said plurality of search areas for the position corresponding to the respective one of said plurality of feature points. 5. The method of claim 4, wherein each of said plurality of search areas is a 15×15 pixels area. 6. The method of claim 4, wherein said calculating a plurality of predicted positions in the respective working layer corresponding to each of said plurality of feature points comprises:calculating each of said plurality of predicted positions in said first working layer of said second image based on said translational displacement parameters; andcalculating each of said plurality of predicted positions in said second next working layer of said second image based on said motion parameters. 7. The method of claim 6, wherein said calculating each of said plurality of predicted positions in said second next working layer of said second image based on said motion parameters comprises:calculating said plurality of predicted positions based on a focal length and a rotational relationship between said next working layer of said first image and said second next working layer of said second image. 8. The method of claim 1, wherein said determining motion parameters comprises:determining a focal length of said first image and said second image based on said plurality of feature points and said plurality of positions. 9.
The method of claim 1, wherein said determining motion parameters comprises:determining a rotation matrix indicative of a rotational relationship between said next working layer of said first image and said second next working layer of said second image, wherein the rotation matrix is determined using singular value decomposition by optimizing the median error. 10. The method of claim 1, wherein said first image overlaps said second image by 25 percent. 11. The method of claim 1, wherein each of said first working layers of said first image and said second image has a pixel array size between 32×32 pixels and 64×64 pixels. 12. A computer program for matching a first image to a second image, said first image overlapping said second image, embodied in a computer readable medium, the computer program comprising instructions for:(a) generating a first set of working layers of said first image, said first set of working layers comprising a plurality of working layers having successively increasing resolutions from a first working layer having the coarsest resolution to a last working layer having the finest resolution;(b) generating a second set of working layers of said second image, said second set of working layers comprising a plurality of working layers having successively increasing resolutions from a first working layer having the coarsest resolution to a last working layer having the finest resolution;(c) determining using correlation an overlap region between said first working layer of said first image and said first working layer of said second image, said overlap region being defined by translational displacement parameters only;(d) selecting said first working layer of said first image;(e) selecting a plurality of feature points in said first working layer of said first image;(f) determining from within a plurality of attention areas and using the translational displacement parameters a plurality of positions in said first working layer of the second image 
corresponding to said plurality of feature points in said first image;(g) determining motion parameters based on said plurality of feature points and said plurality of positions, said motion parameters describing the positional relationship between said first working layer of said first image and said first working layer of said second image;(h) selecting a next working layer in said first set of working layers of said first image, the next working layer being a working layer with the next higher resolution than said first working layer;(i) selecting a second next working layer in said second set of working layers of said second image, the second next working layer having the same resolution as said next working layer of said first image;(j) selecting a plurality of feature points in said next working layer of said first image;(k) determining from within a plurality of attention areas and using the motion parameters a plurality of positions in said second next working layer of the second image corresponding to said plurality of feature points in said first image;(l) determining motion parameters based on said plurality of feature points and said plurality of positions, said motion parameters describing the positional relationship between said next working layer of said first image and said second next working layer of said second image;(m) selecting another next working layer in said first set of working layers of said first image, the next working layer being a working layer with the next higher resolution than the working layer just processed; and(n) repeating (i) to (m) using said next working layer of said first image and said second next working layer of said second image until said last working layer of said first image and said last working layer of said second image have been processed. 13.
The computer program of claim 12, wherein said selecting a plurality of feature points in said first working layer and said selecting a plurality of feature points in said next working layer each comprises:selecting a plurality of image points in the respective working layer of said first image wherein a pixel brightness and an edge orientation of each of said image points change rapidly. 14. The computer program of claim 12, wherein said selecting a plurality of feature points in said first working layer and said selecting a plurality of feature points in said next working layer each comprises:selecting a plurality of image corners in the respective working layer of said first image. 15. The computer program of claim 12, wherein said determining a plurality of positions in said first working layer of said second image and said determining a plurality of positions in said second next working layer of said second image each comprises:calculating a plurality of predicted positions in the respective working layer corresponding to each of said plurality of feature points;selecting a plurality of search areas as said plurality of attention areas, each search area being associated with and surrounding one of said plurality of predicted positions; andsearching within each of said plurality of search areas for the position corresponding to the respective one of said plurality of feature points. 16. The computer program of claim 15, wherein each of said plurality of search areas is a 15×15 pixels area. 17. 
The computer program of claim 15, wherein said calculating a plurality of predicted positions in the respective working layer corresponding to each of said plurality of feature points comprises:calculating each of said plurality of predicted positions in said first working layer of said second image based on said translational displacement parameters; andcalculating each of said plurality of predicted positions in said second next working layer of said second image based on said motion parameters. 18. The computer program of claim 17, wherein said calculating each of said plurality of predicted positions in said second next working layer of said second image based on said motion parameters comprises:calculating said plurality of predicted positions based on a focal length and a rotational relationship between said next working layer of said first image and said second next working layer of said second image. 19. The computer program of claim 12, wherein said determining motion parameters comprises:determining a focal length of said first image and said second image based on said plurality of feature points and said plurality of positions. 20. The computer program of claim 12, wherein said determining motion parameters comprises:determining a rotation matrix indicative of a rotational relationship between said next working layer of said first image and said second next working layer of said second image, wherein the rotation matrix is determined using singular value decomposition by optimizing the median error. 21. The computer program of claim 12, wherein said first image overlaps said second image by 25 percent. 22. The computer program of claim 12, wherein each of said first working layers of said first image and said second image has a pixel array size between 32×32 pixels and 64×64 pixels.
Patents cited by this patent (10)
Szeliski Richard ; Shum Heung-Yeung, 3-dimensional image rotation method and apparatus for producing image mosaics.
Hsu Stephen Charles ; Kumar Rakesh ; Sawhney Harpreet Singh ; Bergen James R. ; Dixon Doug ; Paragano Vince ; Gendel Gary, Method and apparatus for performing local to global multiframe alignment to construct mosaic images.
Haacke E. Mark (University Heights OH) Liang Zhi-pei (Cleveland OH), Parametric image reconstruction using a high-resolution, high signal-to-noise technique.
Rosser Roy (Princeton NJ) Das Subhodev (Princeton NJ) Tan Yi (Plainsboro NJ) von Kaenel Peter (Plainsboro NJ), Pattern recognition system employing unlike templates to detect objects having distinctive features in a video field.
Yaginuma, Shigeru; Kodera, Tetsuhiro; Yamamoto, Kazuto; Ikusawa, Takeshi; Yamamoto, Tomoko, Image processing apparatus and computer readable storage medium stored with control program of image processing apparatus.
Henderson, David L.; Kenny, Kevin B.; Padfield, Dirk R.; Gao, Dashan; McKay, Richard R.; Baxi, Vipul A.; Filkins, Robert J.; Montalto, Michael C., Image quality assessment including comparison of overlapped margins.
Tatke, Lokesh M.; Gammage, Christopher L.; Monroe, Robert J.; Loney, Gregory C., Modes and interfaces for observation, and manipulation of digital images on computer screen in support of pathologist's workflow.