Classification system for consumer digital images using workflow and user interface modules, and face detection and recognition
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06T-001/00
G06T-007/00
Application number
UP-0764336
(2004-01-22)
Registration number
US-7558408
(2009-07-15)
Inventors / Address
Steinberg, Eran
Corcoran, Peter
Prilutsky, Yury
Bigioi, Petronel
Ciuc, Mihai
Ciurel, Stefanita
Vertran, Constantin
Applicant / Address
FotoNation Vision Limited
Agent / Address
Smith, Andrew V.
Citation information
Cited-by count: 55
Cited patents: 80
Abstract
A processor-based system operating according to digitally-embedded programming instructions includes a face detection module for identifying face regions within digital images. A normalization module generates a normalized version of the face region. A face recognition module extracts a set of face classifier parameter values from the normalized face region that are referred to as a faceprint. A workflow module compares the extracted faceprint to a database of archived faceprints previously determined to correspond to known identities. The workflow module determines based on the comparing whether the new faceprint corresponds to any of the known identities, and associates the new faceprint and normalized face region with a new or known identity within a database. A database module serves to archive data corresponding to the new faceprint and its associated parent image according to the associating by the workflow module within one or more digital data storage media. A set of user interface modules serve to obtain user input in the classifying of faceprints and their associated normalized face regions and parent images.
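The abstract above describes a four-stage pipeline: face detection, normalization, faceprint extraction, and a workflow comparison against an archive of known identities. The following is a minimal sketch of that flow, not the patent's implementation; all function names, the normalization scheme, and the distance-based comparison are illustrative assumptions.

```python
import math

# Hypothetical sketch of the abstract's module pipeline. Names and the
# toy feature representation are illustrative, not from the patent.

def detect_face_regions(image):
    """Face detection module: return pixel groups corresponding to faces.
    (Stub: a real detector would scan the image for face candidates.)"""
    return image.get("faces", [])

def normalize(face_region):
    """Normalization module: scale region values into [0, 1]."""
    m = max(face_region) or 1.0
    return [v / m for v in face_region]

def extract_faceprint(normalized_region):
    """Face recognition module: derive the face classifier parameter
    values ('faceprint'). Here, trivially the normalized values."""
    return tuple(normalized_region)

def classify(faceprint, archive, threshold=0.2):
    """Workflow module: compare against archived faceprints and return
    the nearest known identity, or None if nothing is close enough."""
    best_id, best_dist = None, float("inf")
    for identity, archived in archive.items():
        dist = math.dist(faceprint, archived)
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id if best_dist <= threshold else None
```

A database module would then archive the faceprint under the returned (or newly created) identity, and the user interface modules would solicit confirmation when `classify` returns None or a low-confidence match.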
Representative Claims
What is claimed is: 1. A processor-based image acquisition and processing system comprising a lens and image sensor for acquiring a digital image, a processor, and one or more digital data storage media having digitally-embedded programming instructions therein, the processor operating according to the digitally-embedded programming instructions and communicating with the one or more digital data storage media for classifying and archiving images including face regions that are acquired with an image acquisition device, the programming instructions comprising: a face detection module for identifying a group of pixels corresponding to a face region within digital image data acquired by the acquisition device; a normalization module for generating a normalized face region from said face region; a face recognition module for extracting a set of values of face classifier parameters from said normalized face region, said set of face classifier parameter values being collectively known as a faceprint associated with said normalized face region; a workflow module for comparing said extracted faceprint to a database of archived faceprints previously determined to correspond to one or more known identities, and for determining based on the comparing whether a new faceprint corresponds to any of the one or more known identities, and for associating the new faceprint and normalized face region from which said faceprint is derived with a new or known identity within a database comprising other data corresponding to the archived faceprints and associated parent images for performing further comparisons with further faceprints; a database module for archiving the data according to the associating by the workflow module within one or more digital data storage media; and a set of user interface modules for obtaining user input in the classifying of faceprints and their associated normalized face regions and parent images; and wherein one or more archived faceprints have been
previously determined to correspond to the one or more known identities, and the comparing by the workflow module comprises determining proximities of the values of the face classifier parameters of the new face print image with values corresponding to the one or more archived faceprints, and wherein the determining by the workflow module comprises a further confirmation to determine whether the new faceprint corresponds to a known identity when comparisons of the face classifier parameter values of a first faceprint with multiple archived faceprints corresponding to a same known identity result in at least one determination of an identity match and at least one determination that the identities do not match. 2. The system of claim 1, wherein the archiving further for enabling further comparisons with further faceprints and for recalling the faceprints and their associated normalized face regions and parent images. 3. The system of claim 1, wherein the archiving further comprises grouping the new faceprint with a new or prior face class defined by sets of face classifier parameter boundary values corresponding to the new or known identity. 4. The system of claim 1, wherein the identifying by the face detection module further comprises presenting a thumbnail representation of a face candidate region, and receiving user input relevant to the identifying. 5. The system of claim 1, wherein the comparing by the face recognition module further comprises presenting to the user one or more previously-archived faceprints, or thumbnails thereof, previously determined to correspond to one or more known identities, and receiving user input relevant to the comparison with the new faceprint. 6. The system of claim 1, wherein the programming instructions further comprise providing interactive access to the user of data associated with the identities, faceprints, associated normalized face regions or parent images acquired with a digital camera, or combinations thereof. 7. 
The system of claim 6, wherein the data comprises identity data, relationship data, personal data, group membership data, events and occasions data, location-based data, image category data or data stored within image metadata, or combinations thereof. 8. The system of claim 6, wherein the data comprises image data, identity data or face recognition data, or combinations thereof. 9. The system of claim 1, wherein the programming instructions comprise instructions for receiving data management editing from the user regarding image data, identity data or face recognition data, or combinations thereof. 10. The system of claim 1, wherein the programming instructions comprise instructions for receiving data management editing from the user regarding statistical thresholds utilized in the comparing by the face recognition module. 11. The system of claim 10, wherein the programming instructions further comprise instructions for receiving data management editing from the user regarding automated learning or adaptive recognition enhancement processes, or both. 12. The system of claim 1, wherein the programming instructions comprise instructions for providing utilization access by the user of added value services tools. 13. The system of claim 12, wherein the added value services tools comprise slideshow generation tools, print tools, web publisher tools, face detection tools, face recognition tools, or image enhancement tools based on the presence and location of faces in an image, or combinations thereof. 14. The system of claim 1, wherein the comparing by the face recognition module further comprises receiving and processing user input regarding whether the faceprint and associated normalized face region corresponds to a known identity or matches a previously-archived faceprint, or both. 15. 
The system of claim 1, wherein the database module further for associating the new face print image with a new or known identity by grouping the new face print image with a new or prior face class defined by range values of one or more face classifier parameters for performing further comparisons with further faceprints and for recalling the faceprints. 16. The system of claim 1, wherein the programming instructions are stored on or accessible by a stand alone processor-based device configured for receiving raw image data from a digital camera, and the device being coupled with or including user interface hardware, and upon which the classifying is performed. 17. The system of claim 1, wherein the programming instructions are stored at least in part on an embedded appliance for performing some image classifying-related processing prior to outputting processed image data to a further processor-based device upon which the classifying is further performed. 18. The system of claim 17, wherein the embedded appliance comprises a digital camera. 19. The system of claim 18, wherein the digital camera comprises a dedicated digital camera or a camera-capable handheld pda or phone, or a combination thereof. 20. The system of claim 1, wherein the programming instructions are stored at least in part on a processor-based device connected to a network for performing some image classifying-related processing on the device prior to outputting processed data to a back-end server upon which the classifying is further performed. 21. The system of claim 1, wherein the identifying by the face detection module or the comparing by the face recognition module, or both, comprise receiving and utilizing user input confirmation. 22. The system of claim 1, wherein the identifying by the face detection module or the comparing by the face recognition module, or both, are configured for auto-processing subject to selective disablement of the auto-processing by a user. 23. 
The system of claim 1, wherein the identifying by the face detection module applies automatic face region identification when a detection probability is calculated to be above a detection probability threshold or the comparing by the face recognition module applies automatic identity recognition when a matching probability with a prior faceprint is calculated to be above a matching probability threshold, or both. 24. The system of claim 23, wherein the detection probability threshold or the matching probability threshold, or both, are adjustable. 25. The system of claim 24, wherein the detection threshold or the matching threshold, or both, are adjustable by a user, a manufacturer, or an adaptive learning program of the system, or combinations thereof. 26. The system of claim 1, wherein the programming instructions are stored on or accessible by processor-based components within a digital camera upon which the classifying is performed. 27. One or more processor-readable digital data storage media having programming instructions embedded therein for programming a processor to classify images including face regions that are acquired with an image acquisition device, the programming instructions comprising: a workflow module providing for the automatic or semiautomatic processing of identified face regions within digital images from which normalized face classifier parameter values are extracted and collectively referred to as a faceprint, the processing comprising: comparing said extracted faceprint to a database of archived faceprints previously determined to correspond to one or more known identities, determining based on the comparing whether a new faceprint corresponds to any of the one or more known identities, and associating the new faceprint and a normalized face region from which said faceprint is derived with a new or known identity within a database comprising other data corresponding to the archived faceprints and associated parent images for performing 
further comparisons with further faceprints, to permit data corresponding to the new faceprint and its associated parent image to be archived according to the associating by the workflow module within one or more digital data storage media; and a set of user interface modules for obtaining user input in the classifying of faceprints and their associated normalized face regions and parent images; and wherein one or more archived faceprints have been previously determined to correspond to the one or more known identities, and the comparing by the workflow module comprises determining proximities of the values of the face classifier parameters of the new face print image with values corresponding to the one or more archived faceprints, and wherein the determining by the workflow module comprises a further confirmation to determine whether the new faceprint corresponds to a known identity when comparisons of the face classifier parameter values of a first faceprint with multiple archived faceprints corresponding to a same known identity result in at least one determination of an identity match and at least one determination that the identities do not match. 28. The system of claim 27, wherein the archiving further for enabling further comparisons with further faceprints and for recalling the faceprints and their associated normalized face regions and parent images. 29. The system of claim 27, wherein the archiving further comprises grouping the new faceprint with a new or prior face class defined by sets of face classifier parameter boundary values corresponding to the new or known identity. 30. The system of claim 27, wherein the programming instructions further comprise providing interactive access to the user of data associated with the identities, faceprints, associated normalized face regions or parent images acquired with a digital camera, or combinations thereof. 31. 
The system of claim 30, wherein the data comprises identity data, relationship data, personal data, group membership data, events and occasions data, location-based data, image category data or data stored within image metadata, or combinations thereof. 32. The system of claim 30, wherein the data comprises image data, identity data or face recognition data, or combinations thereof. 33. The system of claim 27, wherein the programming instructions comprise instructions for receiving data management editing from the user regarding image data, identity data or face recognition data, or combinations thereof. 34. The system of claim 27, wherein the programming instructions comprise instructions for receiving data management editing from the user regarding statistical thresholds utilized in the comparing. 35. The system of claim 34, wherein the programming instructions further comprise instructions for receiving data management editing from the user regarding automated learning or adaptive recognition enhancement processes, or both. 36. The system of claim 27, wherein the programming instructions comprise instructions for providing utilization access by the user of added value services tools. 37. The system of claim 36, wherein the added value services tools comprise slideshow generation tools, print tools, web publisher tools, face detection tools, face recognition tools, or image enhancement tools based on the presence and location of faces in an image, or combinations thereof. 38. The system of claim 27, wherein the programming instructions are stored on or accessible by a stand alone processor-based device configured for receiving raw image data from a digital camera, and the device being coupled with or including user interface hardware, and upon which the classifying is performed. 39. 
The system of claim 27, wherein the programming instructions are stored at least in part on an embedded appliance for performing some image classifying-related processing prior to outputting processed image data to a further processor-based device upon which the classifying is further performed. 40. The system of claim 39, wherein the embedded appliance comprises a digital camera. 41. The system of claim 40, wherein the digital camera comprises a dedicated digital camera or a camera-capable handheld pda or phone, or a combination thereof. 42. The system of claim 27, wherein the programming instructions are stored at least in part on a processor-based device connected to a network for performing some image classifying-related processing on the device prior to outputting processed data to a back-end server upon which the classifying is further performed. 43. The system of claim 27, wherein the programming instructions are stored on or accessible by processor-based components within a digital camera upon which the classifying is performed. 44. 
A method for classifying and archiving images including face regions that are acquired with an image acquisition device, comprising, using a processor to perform the steps of: (a) generating a normalized face region from an identified face region within digital image data acquired by the acquisition device; (b) automatically extracting a set of face classifier parameter values, collectively referred to as a faceprint, from the normalized face region; (c) automatically comparing said extracted faceprint to a database of archived faceprints previously determined to correspond to one or more known identities; (d) determining based on the comparing whether a new faceprint corresponds to any of the one or more known identities; (e) associating the new faceprint with a new or known identity within a database comprising other data corresponding to the archived faceprints and associated parent images for performing further comparisons with further faceprints, to permit data corresponding to the new faceprint and its associated parent image to be archived according to the associating by the workflow module within one or more digital data storage media; and (f) obtaining user input in the classifying of faceprints and their associated normalized face regions and parent images; and wherein one or more archived faceprints have been previously determined to correspond to the one or more known identities, and the comparing comprises determining proximities of the values of the face classifier parameters of the new face print image with values corresponding to the one or more archived faceprints, and wherein the determining comprises a further confirmation to determine whether the new faceprint corresponds to a known identity when comparisons of the face classifier parameter values of a first faceprint with multiple archived faceprints corresponding to a same known identity result in at least one determination of an identity match and at least one determination that the
identities do not match. 45. The method of claim 44, further comprising archiving the new faceprint and its associated parent image, according to the associating, within one or more digital data storage media. 46. The method of claim 45, wherein the archiving enables further comparisons with further faceprints and for recalling the faceprints and their associated normalized face regions and parent images. 47. The method of claim 45, wherein the archiving further comprises grouping the new faceprint with a new or prior face class defined by sets of face classifier parameter boundary values corresponding to the new or known identity. 48. The method of claim 44, further comprising interactively accessing data output associated with the identities, faceprints, associated normalized face regions or parent images acquired with a digital camera, or combinations thereof. 49. The method of claim 44, further comprising receiving data management editing input regarding image data, identity data or face recognition data, or combinations thereof. 50. The method of claim 44, further comprising receiving data management editing from the user regarding statistical thresholds utilized in the comparing. 51. The method of claim 50, further comprising receiving data management editing input regarding automated learning or adaptive recognition enhancement processes, or both. 52. The method of claim 44, further comprising providing utilization access output of added value services tools. 53. The method of claim 44, wherein the extracting is performed automatically. 54. The method of claim 44, wherein the comparing is performed automatically. 55. The method of claim 54, wherein the extracting is performed automatically. 56. The method of claim 44, further comprising digitally organizing and selectively recalling said new and archived faceprints and the associated parent images. 57. The method of claim 44, further comprising determining that the face region has a particular pose aspect. 
58. The method of claim 57, further comprising completing the recognition process by comparing and determining a match of face classifier parameters with a previously determined known identity sharing a similar pose aspect. 59. The method of claim 44, wherein the generating comprises pose normalizing the identified face region. 60. The system of claim 1, wherein the further confirmation comprises automatically determining that the new face print image corresponds to a known identity based on one or more geometric distance proximities being within a predetermined proximity threshold. 61. The one or more processor-readable digital data storage media of claim 27, wherein the further confirmation comprises automatically determining that the new face print image corresponds to a known identity based on one or more geometric distance proximities being within a predetermined proximity threshold. 62. The method of claim 44, wherein the further confirmation comprises automatically determining that the new face print image corresponds to a known identity based on one or more geometric distance proximities being within a predetermined proximity threshold.
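The "further confirmation" step of claim 1, elaborated in claims 60-62, resolves mixed results: when one faceprint compared against multiple archived faceprints of the same identity yields at least one match and at least one non-match, an automatic confirmation checks whether a geometric distance falls within a predetermined proximity threshold. The sketch below illustrates that logic under stated assumptions; both threshold values and the function name are chosen purely for illustration and do not come from the patent.

```python
import math

# Illustrative sketch of the 'further confirmation' of claims 1 and
# 60-62: mixed per-print match results for one identity trigger a
# stricter proximity test. Thresholds here are arbitrary examples.

def confirm_identity(new_print, archived_prints,
                     match_threshold=0.5, confirm_threshold=0.2):
    """Return True if new_print is accepted as matching the identity
    to which all of archived_prints belong."""
    distances = [math.dist(new_print, p) for p in archived_prints]
    matches = [d <= match_threshold for d in distances]
    if all(matches):
        return True   # unanimous match: no further confirmation needed
    if not any(matches):
        return False  # unanimous non-match
    # Mixed results: confirm only if some geometric distance falls
    # within the stricter predetermined proximity threshold.
    return min(distances) <= confirm_threshold
```

For example, a new print at distance 0.1 from one archived print and 0.9 from another (mixed results) would be confirmed, while one at distances 0.4 and 0.6 would not, since 0.4 exceeds the stricter confirmation threshold.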
Patents cited by this patent (80)
Hiramatsu Tatsuo (Higashiosaka JPX), Auto focus circuit for video camera.
Buhr John D. ; Goodwin Robert M. ; Koeng Frederick R. ; Rivera Jose E., Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing.
Steffens Johannes Bernhard ; Elagin Egor Valerievich ; Nocera Luciano Pasquale Agostino ; Maurer Thomas ; Neven Hartmut, Face recognition from video images.
Poggio Tomaso ; Beymer David ; Jones Michael ; Vetter Thomas,DEX, Image compression by pointwise prototype correspondence using shape and texture information.
Yoichi Takaragi JP; Masanori Yamada JP; Yoshinobu Sato JP; Yasumichi Suzuki JP; Yasuhiro Yamada JP; Akiko Kanno JP; Yoshiki Uchida JP, Image processing apparatus and method for discriminating an original having a predetermined pattern.
Brogliatti, Barbara Spencer; Grakal, Christopher; Janney, Lisa A.; O'Neil, Marisa B.; Smith, Thomas G., Method and apparatus for archiving in and retrieving images from a digital image library.
Smith Joseph J. (2601 Knollwood Rd. Charlotte NC 28211) Moseley Thomas L. (9352 Pinewood St. Charlotte NC 28214), Portable electrolytic testing device for metals.
Mashimo Yukio (Tokyo JA) Sakurada Nobuaki (Kanagawa JA) Ito Tadashi (Kanagawa JA) Ito Fumio (Kanagawa JA) Shinoda Nobuhiko (Tokyo JA), System for exposure measurement and/or focus detection by means of image senser.
Mashimo Yukio (Tokyo JPX) Sakurada Nobuaki (Kanagawa JPX) Ito Tadashi (Kanagawa JPX) Ito Fumio (Kanagawa JPX) Shinoda Nobuhiko (Tokyo JPX), System for exposure measurement and/or focus detection by means of image sensor.
Lee,Kyunghee; Chung,Yongwha; Park,Chee Hang; Byun,Hyeran, System for registering and authenticating human face using support vector machines and method thereof.
Kojima Kazuaki (Nagaokakyo JPX) Kuno Tetsuya (Nagaokakyo JPX) Sugiura Hiroaki (Nagaokakyo JPX) Yamada Takeshi (Nagaokakyo JPX), Video signal processor for detecting flesh tones in an image.
Jain Ramesh ; Horowitz Bradley ; Fuller Charles E. ; Gupta Amarnath ; Bach Jeffrey R. ; Shu Chiao-fe, Visual image database search engine which allows for different schema.
Haupt, Gordon T.; Fleischer, Stephen D.; Vallone, Robert P.; Russell, Stephen G.; Frederick, Timothy B., Automated searching for probable matches in a video surveillance system.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury; Bigioi, Petronel; Ciuc, Mihai; Ciurel, Stefanita; Verlan, Constantin, Classification and organization of consumer digital images using workflow, and face detection and recognition.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury; Bigioi, Petronel; Ciuc, Mihai; Ciurel, Stefanita; Vertan, Constantin, Classification and organization of consumer digital images using workflow, and face detection and recognition.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury; Bigioi, Petronel; Ciuc, Mihai; Ciurel, Stefanita; Vertan, Constantin, Classification and organization of consumer digital images using workflow, and face detection and recognition.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury; Bigioi, Petronel; Ciuc, Mihai; Ciurel, Stefanita; Vertran, Constantin, Classification system for consumer digital images using automatic workflow and face detection and recognition.
el Kaliouby, Rana; Bender, Daniel Abraham; Kodra, Evan; Nowak, Oliver Ernst; Sadowsky, Richard Scott; Senechal, Thibaud; Turcot, Panu James, Collection of affect data from multiple mobile devices.
Corcoran, Peter; Bigioi, Petronel; Stec, Piotr, Face and other object detection and tracking in off-center peripheral regions for nonlinear lens geometries.
Haupt, Gordon T.; Freeman, J. Andrew; Fleischer, Stephen D.; Vallone, Robert P.; Russell, Stephen G.; Frederick, Timothy B., Interactive system for recognition analysis of multiple streams of video.
Kashef, Youssef; el Kaliouby, Rana; Osman, Ahmed Adel; Haering, Niels; Bhatkar, Viprali, Mental state analysis using heart rate collection based on video imagery.
Laaser, William T.; Huff, Gerald B.; Ferrell, Arien C. T.; Graves, Michael J., Technique for recognizing personal objects and accessing associated information.
Bender, Daniel; el Kaliouby, Rana; Picard, Rosalind Wright; Sadowsky, Richard Scott; Turcot, Panu James; Wilder-Smith, Oliver Orion, Using affect within a gaming context.