Detecting orientation of digital images using face detection information
IPC Classification
Country / Type: United States (US) patent, granted
International Patent Classification (IPC, 7th edition): G06K-009/32; G06K-009/62; G06K-009/00
Application number: UP-0024046 (2004-12-27)
Registration number: US-7565030 (2009-07-29)
Inventors / Address
Steinberg, Eran
Prilutsky, Yury
Corcoran, Peter
Bigioi, Petronel
Blonk, Leo
Gangea, Mihnea
Vertan, Constantin
Applicant / Address
FotoNation Vision Limited
Agent / Address
Smith, Andrew V.
Citation information
Times cited: 68
Patents cited: 134
Abstract
A method of automatically establishing the correct orientation of an image using facial information. The method exploits an inherent property of image recognition algorithms in general, and of face detection in particular: recognition is based on criteria that are highly orientation-sensitive. By applying a detection algorithm to an image in various orientations, or alternatively by rotating the classifiers, and comparing the number of faces successfully detected in each orientation, one may infer the most likely correct orientation. The method can be implemented as an automated or semi-automatic method to guide users in viewing, capturing, or printing images.
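The approach the abstract describes can be sketched with a toy "classifier". Here the classifier is a small template scored by normalized cross-correlation; this is an illustrative stand-in, not the patent's actual face-detection classifiers, and the function names are invented for this sketch:

```python
import numpy as np

def match_score(image, template):
    """Toy stand-in for a statistical classifier: the best normalized
    cross-correlation of the template over all windows of the image."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best = 0.0
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            w = w - w.mean()
            denom = np.linalg.norm(w) * tn
            if denom > 0:
                best = max(best, float((w * t).sum() / denom))
    return best

def most_likely_orientation(image, template):
    """Apply the 'classifier' to the image at 0/90/180/270 degrees CCW
    and keep the rotation whose level of match is highest (the
    image-rotation variant of the method)."""
    scores = {k * 90: match_score(np.rot90(image, k), template)
              for k in range(4)}
    return max(scores, key=scores.get), scores
```

For example, embedding an L-shaped template in a scene rotated 90° clockwise makes `most_likely_orientation` report 90, i.e. the counterclockwise rotation that uprights the scene. A production system would substitute a real face detector for `match_score` and compare detection counts or confidences, as the abstract suggests.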
Representative claims
What is claimed is: 1. A method of detecting an orientation of a digital image using statistical classifier techniques comprising: using a processor to perform the following steps: (a) applying a set of classifiers to a digital image in a first orientation and determining a first level of match between said digital image at said first orientation and said classifiers; (b) rotating said digital image to a second orientation, applying the classifiers to said rotated digital image at said second orientation, and determining a second level of match between said rotated digital image at said second orientation and said classifiers; (c) comparing said first and second levels of match between said classifiers and said digital image and between said classifiers and said rotated digital image, respectively; (d) determining which of the first orientation and the second orientation has a greater probability of being a correct orientation based on which of the first and second levels of match, respectively, comprises a higher level of match; (e) rotating said digital image to a third orientation, applying the classifiers to said rotated digital image at said third orientation, and determining a third level of match between said rotated digital image at said third orientation and said classifiers; (f) comparing said third level of match with said first level of match or said second level of match, or both; and (g) determining which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second and third levels of match, respectively, comprises a higher level of match. 2. The method of claim 1, wherein the rotating to the second orientation and the rotating to the third orientation comprise rotations in opposite directions. 3. 
The method of claim 1, wherein the second orientation and the third orientation comprise orientations of the digital image that are relatively rotated from the first orientation, each by an acute or obtuse amount, in opposite directions. 4. The method of claim 1, wherein said classifiers comprise face detection classifiers. 5. The method of claim 4, wherein said classifiers comprise elliptical classifiers. 6. The method of claim 5, wherein said elliptical classifiers are oriented at known orientations. 7. The method of claim 4, wherein said classifiers correspond to regions of a detected face. 8. The method of claim 7, wherein said regions include an eye, two eyes, a nose, a mouth, or an entire face, or combinations thereof. 9. The method of claim 1, wherein said classifiers comprise color based classifiers. 10. The method of claim 1, wherein said classifiers comprise image classifiers for scene analysis. 11. The method of claim 10, wherein said classifiers based on scene analysis comprise perception-based classifiers. 12. The method of claim 1, wherein said classifiers comprise face detection classifiers, color classifiers, semantic-based classifiers, scene analysis classifiers, or combinations thereof. 13. The method of claim 1, further comprising preparing said digital image prior to applying said classifiers to said digital image and determining said level of match between said digital image and said classifiers. 14. The method of claim 13, wherein said preparing said digital image comprises subsampling, color conversion, edge enhancement, blurring, sharpening, tone reproduction correction, exposure correction, gray scale transformation, region segmentation, cropping, or combinations thereof. 15. The method of claim 13, wherein said preparing said digital image includes subsampling. 16. The method of claim 13, wherein said preparing said digital image includes image quality correcting. 17. 
The method of claim 1, further comprising: (h) rotating said digital image to a fourth orientation, applying the classifiers to said rotated digital image at said fourth orientation, and determining a fourth level of match between said rotated digital image at said fourth orientation and said classifiers; (i) comparing said fourth level of match with said first level of match, said second level of match, or said third level of match, or combinations thereof; and (j) determining which of two or more of the first orientation, the second orientation, the third orientation, and the fourth orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second, third, and fourth levels of match, respectively, comprises a higher level of match. 18. The method of claim 17, wherein said second and third orientations comprise 90° opposite rotations of said digital image from said first orientation, and said fourth rotation comprises a 180° rotation of said digital image from said first orientation. 19. The method of claim 1, wherein said second and third orientations comprise 90° opposite rotations of said digital image from said first orientation. 20. 
A method of detecting an orientation of a digital image using statistical classifier techniques comprising: using a processor to perform the following steps: (a) applying a set of classifiers to a digital image in a first orientation and determining a first level of match between said digital image at said first orientation and said classifiers; (b) rotating said set of classifiers a first predetermined amount, applying the classifiers rotated said first amount to said digital image at said first orientation, and determining a second level of match between said digital image at said first orientation and said classifiers rotated said first amount; (c) comparing said first and second levels of match between said classifiers and said digital image and between said rotated classifiers and said digital image, respectively; (d) determining which of the first and second levels of match, respectively, comprises a higher level of match in order to determine whether said first orientation is a correct orientation of said digital image; (e) rotating said set of classifiers a second predetermined amount, applying the classifiers rotated said second amount to said digital image at said first orientation, and determining a third level of match between said digital image at said first orientation and said classifiers rotated said second amount; (f) comparing said third level of match with said first level of match or said second level of match, or both; and (g) determining which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second and third levels of match, respectively, comprises a higher level of match. 21. The method of claim 20, wherein the rotating by the first and second amounts comprise rotations in opposite directions. 22. 
The method of claim 20, wherein the first and second amounts comprise acute or obtuse amounts equal in magnitude and opposite in direction. 23. The method of claim 20, wherein said classifiers comprise face detection classifiers. 24. The method of claim 23, wherein said classifiers comprise elliptical classifiers. 25. The method of claim 24, wherein said elliptical classifiers are initially oriented at known orientations and, when rotated by said first and second amounts, are rotated to different known orientations. 26. The method of claim 23, wherein said classifiers correspond to regions of a detected face. 27. The method of claim 26, wherein said regions include an eye, two eyes, a nose, a mouth, or an entire face, or combinations thereof. 28. The method of claim 20, wherein said classifiers comprise color based classifiers. 29. The method of claim 20, wherein said classifiers comprise image classifiers for scene analysis. 30. The method of claim 29, wherein said classifiers based on scene analysis comprise perception-based classifiers. 31. The method of claim 20, wherein said classifiers comprise face detection classifiers, color classifiers, semantic-based classifiers, scene analysis classifiers, or combinations thereof. 32. The method of claim 20, further comprising preparing said digital image prior to applying said classifiers to said digital image and determining said level of match between said digital image and said classifiers. 33. The method of claim 32, wherein said preparing said digital image comprises subsampling, color conversion, edge enhancement, blurring, sharpening, tone reproduction correction, exposure correction, gray scale transformation, region segmentation, cropping, or combinations thereof. 34. The method of claim 32, wherein said preparing said digital image includes subsampling. 35. The method of claim 32, wherein said preparing said digital image includes image quality correcting. 36. 
The method of claim 20, further comprising: (h) rotating said set of classifiers a third predetermined amount, applying the classifiers rotated by said third amount to said digital image at said first orientation, and determining a fourth level of match between said digital image at said first orientation and said classifiers rotated by said third amount; (i) comparing said fourth level of match with two or more of said first level of match, said second level of match, and said third level of match; and (j) determining which of the two or more of the unrotated classifiers, and those rotated by the first amount, the second amount, and the third amount has a greater probability of matching the first orientation of the digital image based on which of the two or more of the first, second, third, and fourth levels of match, respectively, comprises a higher level of match. 37. The method of claim 36, wherein said first and second amounts comprise 90° opposite rotations of said set of classifiers from an initial orientation, and said third amount comprises a 180° rotation of said set of classifiers. 38. The method of claim 20, wherein said first and second amounts comprise 90° opposite rotations of said set of classifiers from an initial orientation. 39. 
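Claim 20 inverts claim 1: the image stays fixed and the classifiers are rotated instead. A minimal sketch of that variant, again with an invented toy "classifier" (exact template matching) standing in for the patent's statistical classifiers:

```python
import numpy as np

def contains_exact(image, template):
    """Toy 'classifier': does the template occur anywhere in the
    image as an exact sub-array?"""
    th, tw = template.shape
    return any(
        np.array_equal(image[y:y + th, x:x + tw], template)
        for y in range(image.shape[0] - th + 1)
        for x in range(image.shape[1] - tw + 1)
    )

def orientation_via_rotated_classifiers(image, template):
    """Claim 20's variant: rotate the classifier (here, the template)
    rather than the image, and report the rotation that matches.
    The matching rotation reveals how the scene content is rotated,
    so the inverse rotation would upright the image."""
    for k in range(4):
        if contains_exact(image, np.rot90(template, k)):
            return k * 90  # template rotated k*90 degrees CCW matched
    return None
```

Rotating the classifiers can be cheaper than rotating the image when the classifiers are small relative to the image, which is presumably one motivation for this second claim family.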
One or more processor readable storage devices having processor readable code embodied thereon, said processor readable code for programming one or more processors to perform a method of detecting an orientation of a digital image using statistical classifier techniques, the method comprising: (a) applying a set of classifiers to a digital image in a first orientation and determining a first level of match between said digital image at said first orientation and said classifiers; (b) rotating said digital image to a second orientation, applying the classifiers to said rotated digital image at said second orientation, and determining a second level of match between said rotated digital image at said second orientation and said classifiers; (c) comparing said first and second levels of match between said classifiers and said digital image and between said classifiers and said rotated digital image, respectively; (d) determining which of the first orientation and the second orientation has a greater probability of being a correct orientation based on which of the first and second levels of match, respectively, comprises a higher level of match; (e) rotating said digital image to a third orientation, applying the classifiers to said rotated digital image at said third orientation, and determining a third level of match between said rotated digital image at said third orientation and said classifiers; (f) comparing said third level of match with said first level of match or said second level of match, or both; and (g) determining which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second and third levels of match, respectively, comprises a higher level of match. 40. 
The one or more storage devices of claim 39, wherein the rotating to the second orientation and the rotating to the third orientation comprise rotations in opposite directions. 41. The one or more storage devices of claim 39, wherein the second orientation and the third orientation comprise orientations of the digital image that are relatively rotated from the first orientation, each by an acute or obtuse amount, in opposite directions. 42. The one or more storage devices of claim 39, wherein said classifiers comprise face detection classifiers. 43. The one or more storage devices of claim 42, wherein said classifiers comprise elliptical classifiers. 44. The one or more storage devices of claim 43, wherein said elliptical classifiers are oriented at known orientations. 45. The one or more storage devices of claim 42, wherein said classifiers correspond to regions of a detected face. 46. The one or more storage devices of claim 45, wherein said regions include an eye, two eyes, a nose, a mouth, or an entire face, or combinations thereof. 47. The one or more storage devices of claim 39, wherein said classifiers comprise color based classifiers. 48. The one or more storage devices of claim 39, wherein said classifiers comprise image classifiers for scene analysis. 49. The one or more storage devices of claim 48, wherein said classifiers based on scene analysis comprise perception-based classifiers. 50. The one or more storage devices of claim 39, wherein said classifiers comprise face detection classifiers, color classifiers, semantic-based classifiers, scene analysis classifiers, or combinations thereof. 51. The one or more storage devices of claim 39, the method further comprising preparing said digital image prior to applying said classifiers to said digital image and determining said level of match between said digital image and said classifiers. 52. 
The one or more storage devices of claim 51, wherein said preparing said digital image comprises subsampling, color conversion, edge enhancement, blurring, sharpening, tone reproduction correction, exposure correction, gray scale transformation, region segmentation, cropping, or combinations thereof. 53. The one or more storage devices of claim 51, wherein said preparing said digital image includes subsampling. 54. The one or more storage devices of claim 51, wherein said preparing said digital image includes image quality correcting. 55. The one or more storage devices of claim 39, the method further comprising: (h) rotating said digital image to a fourth orientation, applying the classifiers to said rotated digital image at said fourth orientation, and determining a fourth level of match between said rotated digital image at said fourth orientation and said classifiers; (i) comparing said fourth level of match with said first level of match, said second level of match, or said third level of match, or combinations thereof; and (j) determining which of two or more of the first orientation, the second orientation, the third orientation, and the fourth orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second, third, and fourth levels of match, respectively, comprises a higher level of match. 56. The one or more storage devices of claim 55, wherein said second and third orientations comprise 90° opposite rotations of said digital image from said first orientation, and said fourth rotation comprises a 180° rotation of said digital image from said first orientation. 57. The one or more storage devices of claim 39, wherein said second and third orientations comprise 90° opposite rotations of said digital image from said first orientation. 58. 
One or more processor readable storage devices having processor readable code embodied thereon, said processor readable code for programming one or more processors to perform a method of detecting an orientation of a digital image using statistical classifier techniques, the method comprising: (a) applying a set of classifiers to a digital image in a first orientation and determining a first level of match between said digital image at said first orientation and said classifiers; (b) rotating said set of classifiers a first predetermined amount, applying the classifiers rotated said first amount to said digital image at said first orientation, and determining a second level of match between said digital image at said first orientation and said classifiers rotated said first amount; (c) comparing said first and second levels of match between said classifiers and said digital image and between said rotated classifiers and said digital image, respectively; (d) determining which of the first and second levels of match, respectively, comprises a higher level of match in order to determine whether said first orientation is a correct orientation of said digital image; (e) rotating said set of classifiers a second predetermined amount, applying the classifiers rotated said second amount to said digital image at said first orientation, and determining a third level of match between said digital image at said first orientation and said classifiers rotated said second amount; (f) comparing said third level of match with said first level of match or said second level of match, or both; and (g) determining which of two or more of the first orientation, the second orientation and the third orientation has a greater probability of being a correct orientation based on which of the two or more of the first, second and third levels of match, respectively, comprises a higher level of match. 59. 
The one or more storage devices of claim 58, wherein the rotating by the first and second amounts comprise rotations in opposite directions. 60. The one or more storage devices of claim 58, wherein the first and second amounts comprise acute or obtuse amounts equal in magnitude and opposite in direction. 61. The one or more storage devices of claim 58, wherein said classifiers comprise face detection classifiers. 62. The one or more storage devices of claim 61, wherein said classifiers comprise elliptical classifiers. 63. The one or more storage devices of claim 62, wherein said elliptical classifiers are initially oriented at known orientations and, when rotated by said first and second amounts, are rotated to different known orientations. 64. The one or more storage devices of claim 61, wherein said classifiers correspond to regions of a detected face. 65. The one or more storage devices of claim 64, wherein said regions include an eye, two eyes, a nose, a mouth, or an entire face, or combinations thereof. 66. The one or more storage devices of claim 58, wherein said classifiers comprise color based classifiers. 67. The one or more storage devices of claim 58, wherein said classifiers comprise image classifiers for scene analysis. 68. The one or more storage devices of claim 67, wherein said classifiers based on scene analysis comprise perception-based classifiers. 69. The one or more storage devices of claim 58, wherein said classifiers comprise face detection classifiers, color classifiers, semantic-based classifiers, scene analysis classifiers, or combinations thereof. 70. The one or more storage devices of claim 58, the method further comprising preparing said digital image prior to applying said classifiers to said digital image and determining said level of match between said digital image and said classifiers. 71. 
The one or more storage devices of claim 70, wherein said preparing said digital image comprises subsampling, color conversion, edge enhancement, blurring, sharpening, tone reproduction correction, exposure correction, gray scale transformation, region segmentation, cropping, or combinations thereof. 72. The one or more storage devices of claim 70, wherein said preparing said digital image includes subsampling. 73. The one or more storage devices of claim 70, wherein said preparing said digital image includes image quality correcting. 74. The one or more storage devices of claim 58, the method further comprising: (h) rotating said set of classifiers a third predetermined amount, applying the classifiers rotated by said third amount to said digital image at said first orientation, and determining a fourth level of match between said digital image at said first orientation and said classifiers rotated by said third amount; (i) comparing said fourth level of match with two or more of said first level of match, said second level of match, and said third level of match; and (j) determining which of the two or more of the unrotated classifiers, and those rotated by the first amount, the second amount, and the third amount has a greater probability of matching the first orientation of the digital image based on which of the two or more of the first, second, third, and fourth levels of match, respectively, comprises a higher level of match. 75. The one or more storage devices of claim 74, wherein said first and second amounts comprise 90° opposite rotations of said set of classifiers from an initial orientation, and said third amount comprises a 180° rotation of said set of classifiers. 76. The one or more storage devices of claim 58, wherein said first and second amounts comprise 90° opposite rotations of said set of classifiers from an initial orientation.
Patents cited by this patent (134)
Suzuki Shinichi (Tokyo JPX) Yasukawa Seiichi (Kawasaki JPX) Sato Toshihiro (Yokohama JPX) Narisawa Tsutomu (Saitama JPX), A photo taking apparatus capable of making a photograph with flash by a flash device.
White,Timothy J.; Blanco,Felix; Gerard,Michael J.; Leem,Yojin; Kurtenbach,Thomas J.; Christoffel,Douglas W.; Delong,Kevin R.; Smith,Craig M., Apparatus and method for processing digital images having eye color defects.
Hutcheson Timothy L. (Los Gatos CA) Or Wilson (Santa Clara CA) Narayanan Venkatesh (Fremont CA) Mohan Subramaniam (Sunnyvale CA) Wohlmut Peter G. (Saratoga CA) Srinivasan Ramanujam (Sunnyvale CA) Hun, Apparatus for generating a feature matrix based on normalized out-class and in-class variation matrices.
Benati Paul J. (Webster NY) Gray Robert T. (Rochester NY) Cosgrove Patrick A. (Honeoye Falls NY), Automated detection and correction of eye color defects due to flash illumination.
Eleftheriadis Alexandros ; Jacquin Arnaud Eric, Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video.
Harshaw Robert C. (Dallas TX) Burkey Ronald S. (Dallas TX) Doell James T. (Dallas TX) Keith Dennis G. (Dallas TX), Computerized checklist with predetermined sequences of sublists which automatically returns to skipped checklists.
Buhr John D. ; Goodwin Robert M. ; Koeng Frederick R. ; Rivera Jose E., Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing.
Steffens Johannes Bernhard ; Elagin Egor Valerievich ; Nocera Luciano Pasquale Agostino ; Maurer Thomas ; Neven Hartmut, Face recognition from video images.
Poggio Tomaso ; Beymer David ; Jones Michael ; Vetter Thomas,DEX, Image compression by pointwise prototype correspondence using shape and texture information.
Lu Daozheng (Buffalo Grove IL) Kiewit David A. (Palm Harbor FL) Zhang Jia (Mundelein IL), Market research method and system for collecting retail store and shopper market research data.
Brogliatti, Barbara Spencer; Grakal, Christopher; Janney, Lisa A.; O'Neil, Marisa B.; Smith, Thomas G., Method and apparatus for archiving in and retrieving images from a digital image library.
Anderson, Eric C.; Bernstein, John D.; Pavely, John F.; Alsing, Carl J., Method and apparatus for defining a panning and zooming path across a still image during movie creation.
Bedell Jeffrey L. (Arlington MA) Cockroft Gregory (Santa Clara CA) Peters Eric C. (Carlisle MA) Warner William J. (Weston MA), Method and apparatus for manipulating digital video data.
Fujio Noguchi ; Kazuhiko Akaike JP; Setsuko Watanabe Blaszkowski ; Noriko Kotabe GB; Takashi Otani JP; Tadashi Kajiwara, Method and apparatus for providing favorite station and programming information in a multiple station broadcast system.
Tal Peter (53 Driftwood Dr. Port Washington NY 11050), Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system uti.
Steinberg,Eran; Prilutsky,Yury; Corcoran,Peter; Bigioi,Petronel, Method of improving orientation and color balance of digital images using face detection information.
Corcoran,Peter; Steinberg,Eran; Petrescu,Stefan; Drimbarean,Alexandru; Nanu,Florin; Pososin,Alexei; Biglol,Petronel, Real-time face tracking in a digital image acquisition device.
Ianculescu,Mihai; Bigioi,Petronel; Gangea,Mihnea; Petrescu,Stefan; Corcoran,Peter; Steinberg,Eran, Real-time face tracking in a digital image acquisition device.
Maeda Yutaka (Kanagawa JPX) Kyoden Yasuhiro (Sagamihara JPX) Naruto Hirokazu (Higashiosaka JPX) Tanaka Yoshito (Sakai JPX) Shintani Dai (Sakai JPX) Nanba Katsuyuki (Osakasayama JPX), Still video camera having a printer capable of printing a photographed image in a plurality of printing modes.
Mashimo Yukio (Tokyo JA) Sakurada Nobuaki (Kanagawa JA) Ito Tadashi (Kanagawa JA) Ito Fumio (Kanagawa JA) Shinoda Nobuhiko (Tokyo JA), System for exposure measurement and/or focus detection by means of image senser.
Mashimo Yukio (Tokyo JPX) Sakurada Nobuaki (Kanagawa JPX) Ito Tadashi (Kanagawa JPX) Ito Fumio (Kanagawa JPX) Shinoda Nobuhiko (Tokyo JPX), System for exposure measurement and/or focus detection by means of image sensor.
Kuperstein Michael ; Kottas James A., System, method and application for the recognition, verification and similarity ranking of facial or other object patterns.
Kojima Kazuaki (Nagaokakyo JPX) Kuno Tetsuya (Nagaokakyo JPX) Sugiura Hiroaki (Nagaokakyo JPX) Yamada Takeshi (Nagaokakyo JPX), Video signal processor for detecting flesh tones in an image.
Jain Ramesh ; Horowitz Bradley ; Fuller Charles E. ; Gupta Amarnath ; Bach Jeffrey R. ; Shu Chiao-fe, Visual image database search engine which allows for different schema.
Ciuc, Mihai; Capata, Adrian; Mocanu, Valentin; Pososin, Alexei; Florea, Corneliu; Corcoran, Peter, Automatic face and skin beautification using face detection.
Ciuc, Mihai; Capata, Adrian; Mocanu, Valentin; Pososin, Alexei; Florea, Corneliu; Corcoran, Peter, Automatic face and skin beautification using face detection.
Ciuc, Mihai; Capata, Adrian; Mocanu, Valentin; Pososin, Alexei; Florea, Corneliu; Corcoran, Peter, Automatic face and skin beautification using face detection.
Corcoran, Peter; Barcovschi, Igor; Steinberg, Eran; Prilutsky, Yury; Bigioi, Petronel, Digital image processing using face detection and skin tone information.
Corcoran, Peter; Barcovschi, Igor; Steinberg, Eran; Prilutsky, Yury; Bigioi, Petronel, Digital image processing using face detection and skin tone information.
Corcoran, Peter; Bigioi, Petronel; Stec, Piotr, Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries.
Steinberg, Eran; Bigioi, Petronel; Corcoran, Peter; Gangea, Mihnea; Petrescu, Stefan Mirel; Vasiliu, Andrei; Costache, Gabriel; Drimbarean, Alexandru, Face searching and detection in a digital image acquisition device.
Steinberg, Eran; Bigioi, Petronel; Corcoran, Peter; Gangea, Mihnea; Petrescu, Stefan Mirel; Vasiliu, Andrei; Costache, Gabriel; Drimbarean, Alexandru, Face searching and detection in a digital image acquisition device.
Steinberg, Eran; Bigioi, Petronel; Corcoran, Peter; Gangea, Mihnea; Petrescu, Stefan Mirel; Vasiliu, Andrei; Costache, Gabriel; Drimbarean, Alexandru, Face searching and detection in a digital image acquisition device.
Okada, Miyuki; Suzuki, Yoshihiro, Imaging apparatus, processing method of the apparatus making computer execute the methods of selecting face search directions specific to a display mode and capture mode of operation.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury, Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts.
Steinberg, Eran; Prilutsky, Yury; Pososin, Alexei; Bigioi, Petronel; Zamfir, Adrian; Drimbarean, Alexandru; Corcoran, Peter, Method of gathering visual meta data using a reference image.
Uppuluri, Avinash; Morales, Celia; Chakravarthula, Hari; Mehra, Sumat; Paoletti, Tomaso, Methods and systems for ergonomic feedback using an image analysis module.