Image classification and information retrieval over wireless digital networks and the internet
IPC Classification Information
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/00; G06F-017/30
Application Number: US-0203749 (2016-07-06)
Registration Number: US-9798922 (2017-10-24)
Inventors / Address: Myers, Charles A.; Shah, Alex
Applicant / Address: AVIGILON PATENT HOLDING 1 CORPORATION
Agent / Address: Baker & Hostetler LLP
Citation Information: Cited-by count: 0; Cited patents: 65
Abstract
A method and system for matching an unknown facial image of an individual with an image of a celebrity using facial recognition techniques and human perception is disclosed herein. The invention provides an internet-hosted system to find, compare, contrast and identify similar characteristics among two or more individuals using a digital camera, cellular telephone camera, or wireless device, for the purpose of returning information regarding similar faces to the user. The system features classification of unknown facial images from a variety of internet-accessible sources, including mobile phones, wireless camera-enabled devices, and images obtained from digital cameras or scanners that are uploaded from PCs, third-party applications and databases. Once classified, the matching person's name, image and associated metadata are sent back to the user. The method and system use human perception techniques to weight the feature vectors.
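The abstract describes weighting feature vectors by human perception to score how alike two faces appear. A minimal sketch of that idea follows; the feature names, normalization to [0, 1], the similarity measure (1 minus absolute difference), and the weight values are all illustrative assumptions, not details taken from the patent.

```python
def perception_value(unknown, known, weights):
    """Return a 0-100% perception score for two feature vectors.

    Assumes each feature is normalized to [0, 1]; per-feature similarity
    is 1 - |difference|, weighted by a human-perception rating and averaged.
    """
    sims = [1.0 - abs(u - k) for u, k in zip(unknown, known)]
    return 100.0 * sum(w * s for w, s in zip(weights, sims)) / sum(weights)


# Hypothetical feature vectors (e.g. eye color, hair color, face shape):
unknown_face = [0.20, 0.80, 0.55]
celebrity_face = [0.25, 0.70, 0.55]
# Hypothetical perception weights: assume viewers notice hair color most.
weights = [1.0, 3.0, 2.0]

score = perception_value(unknown_face, celebrity_face, weights)
```

Weighting lets a perceptually salient feature (here, hair color) dominate the score even when geometric features match exactly, which is the stated role of the human perception rating.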
Representative Claims
1. A method for matching an unknown facial image with a known facial image, the method comprising:
receiving the unknown facial image at an image classification server;
processing the unknown facial image at the image classification server to create a first set of variables that represent one or more features of the unknown facial image;
comparing the first set of variables to a second set of variables that represent one or more features of facial images among a plurality of known facial images;
matching the first set of variables to a third set of variables that represent one or more features of a second known facial image in the second set of variables to create a matched set of variables that represent matching features between the unknown facial image and the second known facial image;
determining weights to be assigned to one or more of the variables in the matched set of variables, wherein the weights are based on a human perception rating; and
transmitting the second known facial image and a perception value based at least in part on the weights.

2. The method according to claim 1, wherein the first set of variables and each of the second set of variables are based on at least one of: a facial expression, a hair style, a hair color, a facial pose, an eye color, a texture of the face, a color of the face, and facial hair.

3. The method according to claim 1, wherein the image classification server comprises at least one of: an input module, a transmission engine, facial recognition software, an input feed, a feature vector database, a perception engine, and an output module.

4. The method according to claim 1, wherein the perception value ranges from 0% to 100%.

5.
A method for matching an unknown facial image with a known facial image, the method comprising:
receiving one or more unknown facial images of a person from a video camera at an image classification server;
processing the one or more unknown facial images at the image classification server to create one or more sets of variables that represent one or more features of the one or more unknown facial images;
when there are two or more sets of variables, combining the two or more sets of variables into a first set of variables that represent one or more features of the one or more unknown facial images;
comparing the first set of variables to a second set of variables that represent one or more features of facial images among a plurality of known facial images stored in a database;
matching the first set of variables to a third set of variables that represent one or more features of a second known facial image in the second set of variables to create a matched set of variables that represent matching features between the one or more unknown facial images and the second known facial image;
determining a perception value to be assigned to one or more of the variables in the matched set of variables, wherein the perception value is based on a human perception rating;
transmitting the second known facial image and the perception value to a computing device; and
in response to receiving the second known facial image and the perception value from the computing device, transmitting the second known facial image and a confidence value of the second known facial image, based at least in part on the received perception value, to a video surveillance system.

6. The method of claim 5, further comprising adding the one or more unknown facial images to the database and adding the first set of variables that represent one or more features of the one or more unknown facial images to the database.

7.
The method according to claim 5, wherein the first set of variables and each of the second set of variables are based on at least one of: a facial expression, a hair style, a hair color, a facial pose, an eye color, a texture of the face, a color of the face, and facial hair.

8. The method according to claim 7, wherein the second set of variables that represent one or more features of facial images among a plurality of known facial images further comprise at least one of: a distance between eyes, a distance between a center of the eyes to a chin, a size and a shape of eyebrows.

9. The method according to claim 5, wherein the image classification server comprises at least one of: an input module, a transmission engine, facial recognition software, an input feed, a feature vector database, a perception engine, and an output module.

10. The method according to claim 5, wherein the perception value ranges from 0% to 100%.

11. A method for matching an unknown facial image with a known facial image, the method comprising:
receiving the unknown facial image at an image classification server;
processing the unknown facial image at the image classification server to create a first set of variables that represent one or more features of the unknown facial image;
comparing the first set of variables to a second set of variables that represent one or more features of facial images among a plurality of known facial images;
matching the first set of variables to a third set of variables that represent one or more features of a second known facial image in the second set of variables to create a matched set of variables that represent matching features between the unknown facial image and the second known facial image;
determining weights to be assigned to one or more of the variables in the matched set of variables, wherein the weights are based on a human perception rating;
determining a perception value based at least in part on the weights; and
transmitting the second known facial image based on the perception value.

12. The method according to claim 11, wherein the first set of variables and each of the second set of variables are based on at least one of: a hair style, a hair color, a facial pose, an eye color, a texture of the face, a color of the face, and facial hair.

13. The method according to claim 11, wherein the image classification server comprises at least one of: an input module, a transmission engine, facial recognition software, an input feed, a feature vector database, a perception engine, and an output module.

14. The method according to claim 11, wherein the perception value ranges from 0% to 100%.

15. The method according to claim 11, further comprising transmitting the perception value with the second known facial image.

16. A non-transitory computer-readable medium containing instructions, which, when executed on a processor is configured to perform an operation for matching an unknown facial image with a known facial image, comprising:
receiving the unknown facial image at an image classification server;
processing the unknown facial image at the image classification server to create a first set of variables that represent one or more features of the unknown facial image;
comparing the first set of variables to a second set of variables that represent one or more features of facial images among a plurality of known facial images;
matching the first set of variables to a third set of variables that represent one or more features of a second known facial image in the second set of variables to create a matched set of variables that represent matching features between the unknown facial image and the second known facial image;
determining weights to be assigned to one or more of the variables in the matched set of variables, wherein the weights are based on a human perception rating;
determining a perception value based at least in part on the weights; and
transmitting the second known facial image based on the perception value.
17. The non-transitory computer-readable medium according to claim 16, wherein the first set of variables and each of the second set of variables are based on at least one of: a facial expression, a hair style, a hair color, a facial pose, an eye color, a texture of the face, a color of the face, and facial hair.

18. The non-transitory computer-readable medium according to claim 16, wherein the image classification server comprises at least one of: an input module, a transmission engine, facial recognition software, an input feed, a feature vector database, a perception engine, and an output module.

19. The non-transitory computer-readable medium according to claim 16, wherein the perception value ranges from 0% to 100%.

20. The non-transitory computer-readable medium according to claim 16, further comprising transmitting the perception value with the second known facial image.

21. A method for matching an unknown facial image with a known facial image, the method comprising:
receiving the unknown facial image at an image classification server;
processing the unknown facial image at the image classification server to create a first set of variables that represent one or more features of the unknown facial image;
comparing the first set of variables to a second set of variables that represent one or more features of facial images among a plurality of known facial images;
matching the first set of variables to a third set of variables that represent one or more features of a second known facial image in the second set of variables to create a matched set of variables that represent matching features between the unknown facial image and the second known facial image;
determining a confidence value assigned to one or more of the variables in the matched set of variables; and
transmitting the second known facial image and the confidence value.

22. The method according to claim 21, wherein determining the confidence value further comprises:
determining a perception value assigned to one or more of the variables in the matched set of variables;
transmitting the second known facial image and the perception value to a computing device; and
in response to receiving the second known facial image and the perception value from the computing device, determining a confidence value of the second known facial image, based at least in part on the received perception value.

23. The method according to claim 22, wherein determining the perception value further comprises: determining weights to be assigned to one or more of the variables in the matched set of variables, wherein the weights are based on a human perception rating.

24. The method according to claim 21, wherein receiving the unknown facial image further comprises: receiving the unknown facial image of a person from a video camera.

25. The method according to claim 21, further comprising: transmitting the second known facial image based on the confidence value to a video surveillance system.
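Claim 5 adds two steps beyond claim 1: combining variable sets from multiple video frames into a single set, and selecting the best match from a database of known faces. A minimal sketch of those two steps is below; per-feature averaging as the combining operation, the similarity measure, and all names and values are illustrative assumptions rather than the patent's actual implementation.

```python
def combine_frames(frame_vectors):
    """Combine per-frame variable sets into one set by averaging each feature."""
    n = len(frame_vectors)
    return [sum(feature) / n for feature in zip(*frame_vectors)]


def best_match(unknown, database, weights):
    """Return (name, perception value) for the best match in the database.

    database maps a known person's name to a feature vector; features are
    assumed normalized to [0, 1] and scored by weighted similarity.
    """
    def score(known):
        sims = [1.0 - abs(u - k) for u, k in zip(unknown, known)]
        return 100.0 * sum(w * s for w, s in zip(weights, sims)) / sum(weights)

    name = max(database, key=lambda person: score(database[person]))
    return name, score(database[name])


# Two hypothetical frames of the same unknown person from a video camera.
frames = [[0.20, 0.80], [0.30, 0.70]]
combined = combine_frames(frames)  # one variable set per claim 5
db = {"known_a": [0.25, 0.75], "known_b": [0.90, 0.10]}
match, value = best_match(combined, db, weights=[1.0, 1.0])
```

Averaging over frames damps per-frame noise (pose, lighting) before matching, which is one plausible reason the claim combines the sets rather than matching each frame independently.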
Patents cited by this patent (65)
Himmel, David P. (Dallas, TX), Apparatus and a method for storage and retrieval of image patterns.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury; Bigioi, Petronel; Ciuc, Mihai; Ciurel, Stefanita; Vertran, Constantin, Classification and organization of consumer digital images using workflow, and face detection and recognition.
Matsuo, Hideaki; Imagawa, Kazuyuki; Takata, Yuji; Iwasa, Katsuhiro; Eshima, Toshirou; Baba, Naruatsu, Face detection device, face pose detection device, partial image extraction device, and methods for said devices.
DeBan, Abdou F. (Dayton, OH); Xu, Tianning (Dayton, OH); Tumey, David M. (Huber Heights, OH); Arndt, Craig M. (Dayton, OH), Identification and verification system.
Haupt, Gordon T.; Freeman, J. Andrew; Fleischer, Stephen D.; Vallone, Robert P.; Russell, Stephen G.; Frederick, Timothy B., Interactive system for recognition analysis of multiple streams of video.
Nigro, Richard; Kraemer, Werner; Holicky, Robert J.; Mallinson, Richard B., Internet-based modeling kiosk and method for fitting and selling prescription eyeglasses.
Tal, Peter (53 Driftwood Dr., Port Washington, NY 11050), Method and apparatus for uniquely identifying individuals by particular physical characteristics and security system uti.
Bortolussi, Jay F.; Cusack, Jr., Francis J.; Ehn, Dennis C.; Kuzeja, Thomas M.; Saulnier, Michael S., Real-time facial recognition and verification system.
Bolle, Rudolf Maarten; Connell, Jonathan H.; Ratha, Nalini K., System and method for distorting a biometric for transactions with enhanced security and privacy.
Gokturk, Salih Burak; Anguelov, Dragomir; Vanhoucke, Vincent; Lee, Kuang Chih; Vu, Diem; Yang, Danny; Shah, Munjal; Khan, Azhar, System and method for enabling the use of captured images through recognition.