IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0099335 (2011-05-02)
Registration No. | US-8170350 (2012-05-01)
Inventors / Address |
- Steinberg, Eran
- Prilutsky, Yury
- Corcoran, Peter
- Bigioi, Petronel
Applicant / Address |
- DigitalOptics Corporation Europe Limited
Agent / Address |
Citation Info | Times cited: 6 / Patents cited: 83
Abstract
An analysis and classification tool compares at least a portion of a captured image and a reference image of nominally the same scene. One of the captured and reference images is taken with flash and the other is taken without flash. The tool provides a measure of the difference in illumination between the captured image and the reference image. The tool compares the measure with a threshold and segments a foreground region from a background region based on the measure.
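The abstract's flash/no-flash approach can be sketched roughly as follows. This is a hypothetical illustration, not the patented implementation: flash illumination falls off quickly with distance, so foreground objects brighten much more than the background, and thresholding the per-pixel luminance difference between the two exposures yields a coarse foreground mask. The function name and threshold value are illustrative assumptions.

```python
import numpy as np

def segment_foreground(flash_img, ambient_img, threshold=0.2):
    """Rough sketch: mark pixels whose flash-vs-ambient luminance
    difference exceeds a threshold as foreground.

    Both inputs are HxWx3 float arrays in [0, 1] of nominally the
    same scene, one taken with flash and one without.
    """
    # Convert RGB to luminance using Rec. 601 weights.
    w = np.array([0.299, 0.587, 0.114])
    lum_flash = flash_img @ w
    lum_ambient = ambient_img @ w
    # Foreground: large illumination gain under flash (near objects).
    diff = lum_flash - lum_ambient
    return diff > threshold
```

In practice the two exposures would first be registered and gain-matched (the claims call this simulating the ambient exposure) before differencing; this sketch omits that step.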
Representative Claims
1. A digital image acquisition system having no photographic film, comprising: (a) an apparatus for capturing digital images, including a lens, an image sensor and a processor; (b) a flash unit for providing illumination during image capture; (c) an analysis tool for using two or more images of a sequence of images of approximately the same scene to create a motion map, said analysis tool providing a measure of relative differences in motion between regions within said scene; and (d) a classification tool for segmenting a foreground region from a background region within said scene based on said measure.

2. A system according to claim 1 further comprising a segmentation tool for determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, and wherein said analysis tool is arranged to analyse said foreground region or said background region, or both.

3. A system according to claim 1, wherein said classification tool is responsive to said measure exceeding a high threshold to classify a region as a background region, and responsive to said measure not exceeding a low threshold to classify a region as a foreground region.

4. A system according to claim 3, wherein said high and low threshold are coincident.

5. A system according to claim 1, wherein the analysis tool simulates an ambient exposure of one of the images on another of the images including digitally simulating one or a combination of aperture, acquisition speed, color transformations or gain of the captured image on the reference image.

6. A system according to claim 5, wherein the simulating of the ambient exposure of the captured image comprises individual, non-uniform manipulating of individual regions or color channels or combinations thereof.

7. A system according to claim 1, wherein at least in respect of at least one portion, the segmentation tool determines corresponding pixels in the at least two images whose values differ by less than a predetermined threshold, and designates segments of the scene bounded by said determined pixels as foreground or background by comparing motion values in different segments within the scene.

8. A system according to claim 6, further comprising a face detection module.

9. A method of foreground/background segmentation in a captured digital image, comprising: using a processor to perform said method of foreground/background segmentation; analyzing two or more images of a sequence of images of approximately the same scene to create a motion map, determining a measure of relative differences in motion between regions within said scene based on said motion map; and classifying a foreground region segmented from a background region within said scene based on said measure.

10. A method as in claim 9, further comprising determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, and wherein said analysis tool is arranged to analyse said foreground region or said background region, or both.

11. A method as in claim 9, wherein said classifying is performed when said measure exceeds a high threshold to classify a region as a background region, and when said measure does not exceed a low threshold to classify a region as a foreground region.

12. A method as in claim 11, wherein said high and low thresholds are coincident.

13. A method as in claim 9, further comprising simulating of an ambient exposure of one of the images on another of the images including digitally simulating one or a combination of aperture, acquisition speed, color transformations or gain of the captured image on the reference image.

14. A method as in claim 13, wherein the simulating of the ambient exposure of the captured image comprises individual, non-uniform manipulating of individual regions or color channels or combinations thereof.

15. A method as in claim 9, wherein at least in respect of at least one portion, the classifying includes determining corresponding pixels in the at least two images whose values differ by less than a predetermined threshold, and designating segments of the scene bounded by said determined pixels as foreground or background by comparing motion values in different segments within the scene.

16. A method as in claim 9, further comprising detecting a face within the scene, and classifying the face as foreground.

17. One or more non-transitory, processor-readable media having code embedded therein for programming one or more processors to perform a method of foreground/background segmentation in a captured digital image, wherein the method comprises: analyzing two or more images of a sequence of images of approximately the same scene to create a motion map, determining a measure of relative differences in motion between regions within said scene based on said motion map; and classifying a foreground region segmented from a background region within said scene based on said measure.

18. One or more non-transitory, processor-readable media as in claim 17, wherein the method further comprises determining one or more regions that are indicative of the foreground region, or of the background region, or of the background region and the foreground region, within at least one portion of a captured image, and wherein said analysis tool is arranged to analyse said foreground region or said background region, or both.

19. One or more non-transitory, processor-readable media as in claim 17, wherein said classifying is performed when said measure exceeds a high threshold to classify a region as a background region, and when said measure does not exceed a low threshold to classify a region as a foreground region.

20. One or more non-transitory, processor-readable media as in claim 19, wherein said high and low thresholds are coincident.

21. One or more non-transitory, processor-readable media as in claim 17, wherein the method further comprises simulating of an ambient exposure of one of the images on another of the images including digitally simulating one or a combination of aperture, acquisition speed, color transformations or gain of the captured image on the reference image.

22. One or more non-transitory, processor-readable media as in claim 21, wherein the simulating of the ambient exposure of the captured image comprises individual, non-uniform manipulating of individual regions or color channels or combinations thereof.

23. One or more non-transitory, processor-readable media as in claim 17, wherein at least in respect of at least one portion, the classifying includes determining corresponding pixels in the at least two images whose values differ by less than a predetermined threshold, and designating segments of the scene bounded by said determined pixels as foreground or background by comparing motion values in different segments within the scene.

24. One or more non-transitory, processor-readable media as in claim 17, wherein the method further comprises detecting a face within the scene, and classifying the face as foreground.
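The motion-map classification of claims 1, 3, and 11 can be sketched as follows. This is an illustrative reading of the claim language, not the patented code: a simple motion map is built from two frames of roughly the same scene, each labeled region's mean motion is the "measure", and per claim 3 a region whose measure exceeds the high threshold is classified as background while one not exceeding the low threshold is classified as foreground. The function name, threshold values, and frame-difference motion map are all assumptions for illustration.

```python
import numpy as np

def classify_regions(frame_a, frame_b, labels, low=0.05, high=0.15):
    """Classify labeled regions as foreground/background by motion.

    frame_a, frame_b: HxW float grayscale frames of roughly the same scene.
    labels: HxW integer array assigning each pixel to a region.
    Returns a dict mapping region id -> classification string.
    """
    # Crude motion map: per-pixel absolute frame difference.
    motion = np.abs(frame_a - frame_b)
    classes = {}
    for region in np.unique(labels):
        measure = motion[labels == region].mean()
        if measure > high:          # claim 3: exceeds high threshold
            classes[region] = "background"
        elif measure <= low:        # claim 3: does not exceed low threshold
            classes[region] = "foreground"
        else:
            classes[region] = "undecided"
    return classes
```

Claims 4, 12, and 20 cover the degenerate case where the two thresholds coincide (`low == high`), which removes the "undecided" band.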