Method of overlap-dependent image stitching for images captured using a capsule camera
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06T-011/60
A61B-001/00
A61B-001/04
Application Number
US-0675744 (2015-04-01)
Registration Number
US-9324172 (2016-04-26)
Inventors
Wu, Chenyu
Xu, Yi
Wang, Kang-Huai
Applicant
Capso Vision Inc.
Agent
Blairtech Solution LLC
Citation Information
Cited by: 0
Cited patents: 9
Abstract
A method of processing images captured by an in vivo capsule camera is disclosed. Images whose overlap exceeds a threshold are stitched into larger images. If the current image has no large overlap with any of its neighboring images, it is designated as a non-stitched image. Any image that lies between two stitched images but is not included in a stitched image is likewise designated as a non-stitched image. The large-overlap stitching can be performed iteratively by treating the stitched and non-stitched images as the to-be-processed images in the next round. A second stage can then be applied to stitch small-overlap images, and this small-overlap stitching can also be applied iteratively. A third stage can further be applied to stitch the output images of the second stage.
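The staged, iterative grouping described in the abstract can be sketched as a simple loop. This is an illustrative sketch only, not the patented implementation: `overlap` and `merge` are hypothetical placeholders for the patent's overlap measure and blending step, and the neighbor window N1 is simplified here to the immediately preceding image.

```python
def stitch_pass(images, overlap, merge, threshold, want_large=True):
    """One pass of overlap-dependent grouping: merge the current image into the
    previous output image when their overlap is above (large-overlap stage) or
    below (small-overlap stage) the threshold; otherwise pass it through as a
    non-stitched image. Returns the new list and whether anything was stitched."""
    out, stitched_any = [], False
    for img in images:
        if out:
            d = overlap(out[-1], img)
            cond = d > threshold if want_large else d < threshold
            if cond:
                out[-1] = merge(out[-1], img)  # placeholder for actual blending
                stitched_any = True
                continue
        out.append(img)
    return out, stitched_any


def iterative_stitch(images, overlap, merge, threshold, want_large=True):
    """Repeat one stage, feeding stitched and non-stitched images back in as the
    to-be-processed images, until a pass produces no new stitched image."""
    changed = True
    while changed:
        images, changed = stitch_pass(images, overlap, merge, threshold, want_large)
    return images
```

As a toy usage, images can be modeled as sets of covered positions, overlap as the intersection ratio, and merging as set union; two pairs of heavily overlapping frames then collapse into two stitched images, and the second pass finds nothing more to stitch and terminates.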
Representative Claims
1. A method of processing images captured using an in vivo capsule camera, the method comprising:
receiving a plurality of images captured by the in vivo capsule camera as to-be-processed images;
applying large-overlap image stitching to the to-be-processed images, wherein said applying the large-overlap image stitching comprises:
generating each of one or more first-stage stitched images by stitching a current to-be-processed image with a previous to-be-processed image or a previously stitched image if a degree of picture overlap between the current to-be-processed image and the previous to-be-processed image or the previously stitched image is larger than a first threshold, wherein the current to-be-processed image is within N1 neighboring to-be-processed images of the previous to-be-processed image or the previously stitched image, and N1 is a positive integer; and
generating each of one or more first-stage non-stitched images if the degree of picture overlap between the current to-be-processed image and none of N1 neighboring to-be-processed images is larger than the first threshold; and
providing first information associated with said one or more first-stage stitched images if said one or more first-stage stitched images exist and providing said one or more first-stage non-stitched images if said one or more first-stage non-stitched images exist.

2. The method of claim 1, further comprising: repeating said large-overlap image stitching by setting said one or more first-stage stitched images and said one or more first-stage non-stitched images as the to-be-processed images in a next first-stage iteration if at least one first-stage stitched image is generated in a current first-stage iteration; and terminating said large-overlap image stitching if no first-stage stitched image is generated in the current first-stage iteration.

3. The method of claim 2, after said large-overlap image stitching is terminated, further comprising:
determining second-stage to-be-processed images consisting of any existing first-stage stitched images and any existing first-stage non-stitched image;
applying small-overlap image stitching to the second-stage to-be-processed images, wherein said applying the small-overlap image stitching comprises:
generating each of one or more second-stage stitched images by stitching a current second-stage to-be-processed image with a previous second-stage to-be-processed image or a previously second-stage stitched image if the degree of picture overlap between the current second-stage to-be-processed image and the previous second-stage to-be-processed image or the previously second-stage stitched image is smaller than a second threshold, wherein the current second-stage to-be-processed image is within N2 neighboring second-stage to-be-processed images of the previous second-stage to-be-processed image or the previously second-stage stitched image, and N2 is a second positive integer; and
generating each of one or more second-stage non-stitched images if the degree of picture overlap between the current second-stage to-be-processed image and none of N2 neighboring second-stage to-be-processed images is smaller than the second threshold; and
providing second information associated with said one or more second-stage stitched images if said one or more second-stage stitched images exist and providing said one or more second-stage non-stitched images if said one or more second-stage non-stitched images exist.

4. The method of claim 3, further comprising: repeating said small-overlap image stitching by setting said one or more second-stage stitched images and said one or more second-stage non-stitched images as the second-stage to-be-processed images in a next second-stage iteration if at least one second-stage stitched image is generated in a current second-stage iteration; and terminating said small-overlap image stitching if no second-stage stitched image is generated in the current second-stage iteration.

5. The method of claim 4, after said small-overlap image stitching is terminated, further comprising: determining third-stage to-be-processed images consisting of any existing second-stage stitched images and any existing second-stage non-stitched image; generating third-stage stitched images by applying general image stitching to the third-stage to-be-processed images; and providing third information associated with the third-stage stitched images.

6. The method of claim 5, wherein said general image stitching comprises detecting clinical features in the third-stage to-be-processed images and inpainting the clinical features detected back into the third-stage stitched images.

7. The method of claim 5, wherein the first information comprises first indices and first image model parameters associated with said one or more first-stage stitched images; the second information comprises second indices and second image model parameters associated with said one or more second-stage stitched images; and the third information comprises third indices and third image model parameters associated with said one or more third-stage stitched images.

8. The method of claim 7, further comprising retrieving the first indices and the first image model parameters associated with said one or more first-stage stitched images, generating said one or more first-stage stitched images based on the first indices and the first image model parameters associated with said one or more first-stage stitched images; retrieving the second indices and the second image model parameters associated with said one or more second-stage stitched images, generating said one or more second-stage stitched images based on the second indices and the second image model parameters associated with said one or more second-stage stitched images; and retrieving the third indices and the third image model parameters associated with said one or more third-stage stitched images, generating said one or more third-stage stitched images based on the third indices and the third image model parameters associated with said one or more third-stage stitched images.

9. The method of claim 4, wherein the first information comprises first indices and first image model parameters associated with said one or more first-stage stitched images, and the second information comprises second indices and second image model parameters associated with said one or more second-stage stitched images.

10. The method of claim 9, further comprising retrieving the first indices and the first image model parameters associated with said one or more first-stage stitched images, generating said one or more first-stage stitched images based on the first indices and the first image model parameters associated with said one or more first-stage stitched images, and retrieving the second indices and the second image model parameters associated with said one or more second-stage stitched images, generating said one or more second-stage stitched images based on the second indices and the second image model parameters associated with said one or more second-stage stitched images.

11. The method of claim 3, wherein if said generating each of said one or more second-stage stitched images by stitching the current second-stage to-be-processed image with the previous second-stage to-be-processed image or the previously second-stage stitched image generates one second-stage stitched image having a horizontal size or a vertical size exceeding a size threshold, said generating each of said one or more second-stage stitched images is skipped.

12. The method of claim 3, wherein the degree of picture overlap between the current second-stage to-be-processed image and the previous second-stage to-be-processed image or the previously second-stage stitched image is determined based on global transformation between the current second-stage to-be-processed image and the previous second-stage to-be-processed image or the previously second-stage stitched image.

13. The method of claim 12, wherein the global transformation is estimated by exhaustive search for intensity-based image matching between the current second-stage to-be-processed image and the previous second-stage to-be-processed image or the previously second-stage stitched image.

14. The method of claim 3, wherein said generating each of said one or more second-stage stitched images by stitching the current second-stage to-be-processed image with the previous second-stage to-be-processed image or the previously second-stage stitched image comprises applying a local transformation to an overlap area of the current second-stage to-be-processed image with the previous second-stage to-be-processed image or the previously second-stage stitched image.

15. The method of claim 14, wherein the local transformation includes free-form deformation cubic B-splines.

16. The method of claim 14, wherein image model parameters required for said generating each of said one or more second-stage stitched images are optimized using a gradient-based process.

17. The method of claim 2, wherein the first information comprises indices and image model parameters associated with said one or more first-stage stitched images after said large-overlap image stitching is terminated.

18. The method of claim 17, further comprising retrieving the indices and the image model parameters associated with said one or more first-stage stitched images, and generating said one or more first-stage stitched images based on the indices and the image model parameters associated with said one or more first-stage stitched images.

19. The method of claim 1, wherein said generating each of said one or more first-stage stitched images from the to-be-processed images comprises applying a local transformation to an overlap area between the current to-be-processed image and the previous to-be-processed image or the previously stitched image.

20. The method of claim 19, wherein the local transformation includes free-form deformation cubic B-splines.

21. The method of claim 19, wherein image model parameters required for said generating each of said one or more first-stage stitched images are optimized using a gradient-based process.

22. The method of claim 1, wherein if said generating each of said one or more first-stage stitched images by stitching the current to-be-processed image with the previous to-be-processed image or the previously stitched image generates one first-stage stitched image having a horizontal size or a vertical size exceeding a size threshold, said generating each of said one or more first-stage stitched images is skipped.

23. The method of claim 1, wherein each first-stage stitched image is generated by stitching the current to-be-processed image with the previous to-be-processed image continuously when the degree of picture overlap between the current to-be-processed image and the previous to-be-processed image or the previously stitched image is larger than the first threshold, and said stitching the current to-be-processed image with the previous to-be-processed image continuously is terminated when a stop criterion is met.

24. The method of claim 23, wherein the stop criterion corresponds to a total number of the to-be-processed images stitched into the first-stage stitched image reaching a threshold, or a horizontal size or vertical size of the first-stage stitched image exceeding a threshold.

25. The method of claim 1, wherein each first-stage stitched image is generated by stitching the current to-be-processed image with the previous to-be-processed image and each first-stage stitched image consists of two to-be-processed images.

26. A method of processing images captured using an in vivo capsule camera, the method comprising:
receiving a plurality of images captured by the in vivo capsule camera as to-be-processed images;
applying small-overlap image stitching to the to-be-processed images, wherein said applying the small-overlap image stitching comprises:
generating each of one or more first-stage stitched images by stitching a current to-be-processed image with a previous to-be-processed image or a previously stitched image if a degree of picture overlap between the current to-be-processed image and the previous to-be-processed image or the previously stitched image is smaller than a first threshold, wherein the current to-be-processed image is within N1 neighboring to-be-processed images of the previous to-be-processed image or the previously stitched image, and N1 is a positive integer; and
generating each of one or more first-stage non-stitched images if the degree of picture overlap between the current to-be-processed image and none of N1 neighboring to-be-processed images is smaller than the first threshold; and
providing first information associated with said one or more first-stage stitched images if said one or more first-stage stitched images exist and providing said one or more first-stage non-stitched images if said one or more first-stage non-stitched images exist.
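Claims 12 and 13 estimate the degree of picture overlap from a global transformation found by exhaustive search for intensity-based image matching. The following is a minimal sketch of that idea under a simplifying assumption: the global transformation is restricted to an integer translation, and match quality is scored by mean absolute intensity difference. The patent's actual transformation model and matching score may be richer; `estimate_overlap` and its parameters are illustrative names, not from the patent.

```python
import numpy as np


def estimate_overlap(a, b, max_shift=16):
    """Exhaustively search integer translations of image b against image a,
    scoring each candidate by the mean absolute intensity difference over the
    overlapping region. Returns the best (dy, dx) shift and the resulting
    overlap ratio (overlap area divided by image area). Assumes a and b have
    the same shape; shift (0, 0) always yields a valid full-overlap candidate."""
    h, w = a.shape
    best_shift, best_score = None, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap window in a's coordinates, and the matching window in b.
            ay0, ay1 = max(0, dy), min(h, h + dy)
            ax0, ax1 = max(0, dx), min(w, w + dx)
            by0, bx0 = max(0, -dy), max(0, -dx)
            if ay1 <= ay0 or ax1 <= ax0:
                continue  # no overlap at this shift
            score = np.mean(np.abs(
                a[ay0:ay1, ax0:ax1].astype(float)
                - b[by0:by0 + (ay1 - ay0), bx0:bx0 + (ax1 - ax0)].astype(float)))
            if score < best_score:
                best_shift, best_score = (dy, dx), score
    dy, dx = best_shift
    overlap_ratio = ((h - abs(dy)) * (w - abs(dx))) / (h * w)
    return best_shift, overlap_ratio
```

The returned overlap ratio is the quantity a stage would compare against its threshold: above the first threshold an image would be a large-overlap stitching candidate, below the second threshold a small-overlap candidate.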
Patents cited by this patent (9)
Szeliski Richard ; Shum Heung-Yeung, 3-dimensional image rotation method and apparatus for producing image mosaics.
Szeliski Richard ; Shum Heung-Yeung, Image mosaic construction system and apparatus with patch-based alignment, global block adjustment and pair-wise motion-based local warping.