IPC Classification Information

Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) | (none listed)
Application Number | US-0111696 (2005-04-21)
Registration Number | US-7403642 (2008-07-22)
Inventors / Address |
- Zhang, Lei
- Li, Mingjing
- Ma, Wei Ying
- Sun, Yan Feng
- Hu, Yuxiao
Applicant / Address | (none listed)
Agent / Address | (none listed)
Citation Information | Cited by: 67 / Patents cited: 1
Abstract
Systems, engines, user interfaces, and methods allow a user to select a group of images, such as digital photographs, and assign to the group of images the name of a person who is represented in each of the images. The name is automatically propagated to the face of the person, each time the person's face occurs in an image. In one implementation, names and associations are shared between a browsing mode for viewing multiple images at once and a viewer mode for viewing one image at a time. The browsing mode can provide a menu of candidate names for annotating a face in a single image of the viewer mode. Likewise, the viewer mode can provide annotated face information to the browser mode for facilitating name propagation. Identification of a person's face in multiple images can be accomplished not only by finding similarities in facial features but also by finding similarities in contextual features near the face in different images.
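The propagation step the abstract describes — pick one face per selected image so that the chosen faces are mutually most similar, then attach the user-supplied name to them — can be sketched as follows. This is a minimal illustration, not the patented implementation: the face representation (plain feature vectors), the cosine similarity measure, and the brute-force search over one-face-per-image combinations are all assumptions chosen for clarity.

```python
import itertools

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def propagate_name(name, images):
    """Pick one face per image so that the sum of pairwise
    similarities over the chosen faces is maximal, then annotate
    those faces with the given name.

    `images` is a list of face lists; each face is a dict with a
    'features' vector and an 'annotations' set. (Hypothetical data
    layout for illustration only.)
    """
    best_combo, best_score = None, float("-inf")
    # Brute force over one-face-per-image combinations; fine for
    # the small batches a user selects by hand.
    for combo in itertools.product(*images):
        score = sum(
            cosine(f1["features"], f2["features"])
            for f1, f2 in itertools.combinations(combo, 2)
        )
        if score > best_score:
            best_combo, best_score = combo, score
    for face in best_combo:
        face["annotations"].add(name)
    return list(best_combo)
```

Maximizing the sum of pairwise similarities mirrors the objective function of the claims; a production system would replace the exhaustive search with a tractable optimization and use learned facial plus contextual features.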
Representative Claims
The invention claimed is:

1. A method, comprising: selecting digital images; assigning a name to the digital images; and propagating the name from the digital images to a face of an individual represented in each of the digital images; wherein the propagating includes applying a similarity measure: to define an objective function as a sum of similarities between multiple features of each pair of faces of the individual in different selected images, and to maximize the objective function in order to associate the name with the face in each image; and wherein applying the similarity measure includes integrating the multiple features into a Bayesian framework according to: where p(ΩI) and p(ΩE) are the priors, Δfi=(fi1-fi2), and p(Δfi|ΩI) and p(Δfi|ΩE) are the likelihoods of a given difference Δfi between the features.

2. The method as recited in claim 1, wherein the propagating includes defining an objective function as a sum of similarities between each pair of faces of the individual in different selected digital images; and maximizing the objective function to associate the name with the face in each digital image.

3. The method as recited in claim 1, further comprising improving a name propagation accuracy by using at least some previously named faces of individuals represented in the selected digital images.

4. The method as recited in claim 1, further comprising improving an annotation accuracy by using at least some digital images that have been previously associated with names.

5. The method as recited in claim 1, wherein the propagating includes defining an objective function as a sum of similarities between each pair of faces of the individual in different selected digital images, wherein each face includes a visual context of the image near the face; and maximizing the objective function to associate the name with the face in each digital image.

6. The method as recited in claim 1, further comprising sharing propagated name information between a browsing mode for viewing and selecting multiple of the digital images and a viewer mode for viewing single digital images and annotating a face in the single digital image.

7. The method as recited in claim 6, wherein the sharing includes sharing a named face from the viewer mode to the browsing mode.

8. The method as recited in claim 6, wherein the sharing includes associating names from the browsing mode with a menu of names for annotating a face in the viewer mode.

9. A face annotation engine, comprising: a selection engine for selecting multiple images, each image having at least a face of an individual person in common, wherein the multiple images are capable of including faces of multiple persons; a user interface for applying a name to the selected multiple images; and a name propagation engine for determining the face common to the selected multiple images and annotating the name to the face; wherein the name propagation engine includes a similarity measure engine to define an objective function as a sum of similarities between multiple features of each pair of faces of the individual in different selected images and to maximize the objective function in order to associate the name with the face in each image; and wherein the similarity measure engine integrates the multiple features into a Bayesian framework according to: where p(ΩI) and p(ΩE) are the priors, Δfi=(fi1-fi2), and p(Δfi|ΩI) and p(Δfi|ΩE) are the likelihoods of a given difference Δfi between the features.

10. The face annotation engine as recited in claim 9, further comprising a labeled faces list for use by a browsing mode manager and a viewer mode manager, wherein the browsing mode manager performs selection and annotation of a group of images and the viewer mode manager performs annotation of a face in a single image.

11. The face annotation engine as recited in claim 9, wherein the name propagation engine includes a contextual features engine to associate multiple faces in different images with the same person based on a non-facial feature that is similar in the different images.

12. The face annotation engine as recited in claim 9, further comprising a list of previously input names and a menu generator to provide a menu of candidate names for annotating a face in an image.

13. The face annotation engine as recited in claim 9, further comprising: a similar face retriever to allow a user to search for similar faces by specifying either a face or a name and to annotate multiple faces in a batch manner.

14. The face annotation engine as recited in claim 9, wherein the name propagation engine improves a name propagation accuracy by using at least some previously named faces.

15. The face annotation engine as recited in claim 9, wherein the name propagation engine improves an annotation accuracy by using at least some images that have been previously associated with names.

16. A system, comprising: means for selecting a batch of digital images, wherein each image in the batch has at least a face of one person in common; means for providing a name for the batch; means for propagating the name to the face of the person in common; means for applying a similarity measure during the propagating to define an objective function as a sum of similarities between multiple features of each pair of faces of the individual in different selected images, and to maximize the objective function in order to associate the name with the face in each image; and means for integrating the multiple features into a Bayesian framework according to: where p(ΩI) and p(ΩE) are the priors, Δfi=(fi1-fi2), and p(Δfi|ΩI) and p(Δfi|ΩE) are the likelihoods of a given difference Δfi between the features.

17. The system as recited in claim 16, further comprising: means for identifying a face between multiple representations of the face in different images based on at least a feature of the face and a feature of a visual context near the face.
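The Bayesian formula referenced in claims 1, 9, and 16 ("according to:") was an image in the original patent and is not reproduced in this record. The sketch below is an assumption, not the patented formula: given the definitions the claims do state (priors p(ΩI), p(ΩE) for the intra-personal and extra-personal classes, and per-feature likelihoods p(Δfi|Ω) of a feature difference Δfi), it computes the standard two-class posterior under a feature-independence assumption.

```python
from math import prod

def bayesian_similarity(lik_intra, lik_extra, p_intra=0.5, p_extra=0.5):
    """Posterior probability that two faces show the same person.

    Combines independent per-feature likelihoods in a two-class
    Bayes rule (an assumed reconstruction of the formula the
    claims reference, not the patent's own equation):

        P(ΩI | Δf1..Δfn) =
            prod_i p(Δfi|ΩI) * P(ΩI)
            / ( prod_i p(Δfi|ΩI) * P(ΩI)
              + prod_i p(Δfi|ΩE) * P(ΩE) )
    """
    num = p_intra * prod(lik_intra)   # intra-personal evidence
    alt = p_extra * prod(lik_extra)   # extra-personal evidence
    return num / (num + alt)
```

Because the posterior multiplies one likelihood per feature, facial features and the contextual (non-facial) features of claims 5 and 11 plug into the same rule: each extra feature simply contributes another factor to each product.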