IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
Application No.: US-0284473 (2011-10-28)
Registration No.: US-8811747 (2014-08-19)
Inventor / Address:
Applicant / Address: Intellectual Ventures Fund 83 LLC
Citation Info: Cited by 0 patents; cites 12 patents
Abstract

A computer implemented method for identifying one or more individual regions in a digital image that each include a human face, padding each of the one or more individual regions to form individual padded regions, and digitally defining at least one combined padded region each comprising one or more of the individual padded regions that overlap by a preselected amount.
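The abstract's pipeline — detect face regions, pad each one, then merge padded regions that overlap by at least a preselected amount — can be sketched as follows. This is a minimal illustrative implementation, not the patent's actual method; all names (`Region`, `pad`, `overlap_fraction`, `combine_overlapping`) and the greedy merge strategy are assumptions.

```python
# Illustrative sketch of the abstract: pad detected face regions, then
# merge padded regions that overlap by a preselected amount.
# All identifiers and the greedy merge order are assumptions, not from
# the patent text.
from dataclasses import dataclass

@dataclass(frozen=True)
class Region:
    x0: int
    y0: int
    x1: int  # exclusive right edge
    y1: int  # exclusive bottom edge

    def area(self) -> int:
        return max(0, self.x1 - self.x0) * max(0, self.y1 - self.y0)

def pad(r: Region, margin: int, width: int, height: int) -> Region:
    """Expand a region by `margin` pixels on all sides, clamped to the image."""
    return Region(max(0, r.x0 - margin), max(0, r.y0 - margin),
                  min(width, r.x1 + margin), min(height, r.y1 + margin))

def overlap_fraction(a: Region, b: Region) -> float:
    """Intersection area divided by the smaller region's area."""
    ix = Region(max(a.x0, b.x0), max(a.y0, b.y0),
                min(a.x1, b.x1), min(a.y1, b.y1))
    denom = min(a.area(), b.area())
    return ix.area() / denom if denom else 0.0

def combine_overlapping(regions: list[Region], threshold: float) -> list[Region]:
    """Greedily union padded regions whose overlap meets `threshold`."""
    combined: list[Region] = []
    for r in regions:
        for i, c in enumerate(combined):
            if overlap_fraction(r, c) >= threshold:
                # Merge into the bounding box of the two regions.
                combined[i] = Region(min(r.x0, c.x0), min(r.y0, c.y0),
                                     max(r.x1, c.x1), max(r.y1, c.y1))
                break
        else:
            combined.append(r)
    return combined
```

With three detected faces, two close together and one far away, padding and merging at a 25% overlap threshold yields two combined regions: one covering the adjacent pair and one for the isolated face.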
Representative Claims
1. A method for modifying a digital image, the method comprising: identifying, by a computing device, two or more individual regions in a digital image that each include a face; evaluating the two or more individual regions by the computing device, wherein the evaluating the two or more individual regions comprises at least one of: assigning a fitness score to each of the two or more individual regions; or assigning a class to each of the two or more individual regions; padding, by the computing device, each of the two or more individual regions to form individual padded regions; and defining, by the computing device, a combined padded region based on at least one of the fitness score of each of the two or more individual regions or the class of each of the two or more individual regions, wherein the combined padded region comprises the individual padded regions, wherein the evaluating the two or more individual regions comprises assigning a fitness score to each of the two or more individual regions based on size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, or contrast of the face in each of the two or more individual regions, and wherein the defining the combined padded region comprises defining a first combined padded region including only those of the two or more individual regions having a fitness score above a preselected threshold.

2. The method of claim 1, wherein the defining the combined padded region comprises defining the combined padded region by combining only overlapping ones of the individual padded regions.

3. The method of claim 2, wherein the defining the combined padded region comprises defining the combined padded region by combining only overlapping ones of the individual padded regions having an overlap amount that is greater than a preselected threshold.

4. The method of claim 1, further comprising defining an additional combined padded region comprising one or more of the individual padded regions that are not yet part of the first combined padded region having a fitness score above a second preselected threshold.

5. The method of claim 1, wherein the evaluating the two or more individual regions comprises assigning a fitness score to each of the two or more individual regions based on size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, or contrast of the face in each of the two or more individual regions, and wherein the defining the combined padded region comprises defining the combined padded region by combining only those of the two or more individual regions having fitness scores deviating from each other less than a predetermined amount.

6. The method of claim 1, wherein the evaluating the two or more individual regions comprises assigning a class to each of the two or more individual regions based on size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, or contrast of the face in each of the two or more individual regions, and wherein the defining the combined padded region comprises defining the combined padded region by combining only those of the two or more individual regions having a common classification.

7. The method of claim 1, wherein the evaluating the two or more individual regions comprises assigning a class to each of the two or more individual regions based on age, race, gender, identity, facial hair, glasses type, hair type, smoking action, drinking action, eating action, facial gesture, makeup status, mask status, scar status, tattoo status, or hat status of the face in each of the two or more individual regions and type of clothing, uniform, neckwear, or jewelry near the face in each of the two or more individual regions, and wherein the defining the combined padded region comprises defining the combined padded region by combining only those of the two or more individual regions having a common classification.

8. The method of claim 1, wherein the identifying the two or more individual regions comprises identifying individual regions in the digital image that comprise a human face and upper torso, a human face and torso, or a human face and full body.

9. The method of claim 8, wherein the identifying the two or more individual regions further comprises identifying body gestures of the human body, and wherein the defining the combined padded region comprises defining a combined region comprising individual regions each having similar body gestures.

10. The method of claim 1, wherein the identifying the two or more individual regions comprises identifying individual regions in the digital image that comprise an animal face, an animal face and torso, or an animal face and full body.

11. The method of claim 10, wherein the identifying the two or more individual regions further comprises identifying body gestures of an animal body associated with the animal face, and wherein the defining the combined padded region comprises defining a combined region comprising individual regions each having similar gestures of the animal body.

12. The method of claim 11, further comprising defining additional combined padded regions comprising two or more highest scoring ones of individual padded regions that are not yet part of a combined padded region.

13. The method of claim 1, wherein the defining the two or more individual regions comprises defining a combined padded region comprising only socially related ones of the individual padded regions.

14. The method of claim 13, wherein the socially related ones of the individual padded regions are selected according to parent-child relationship, family relationship, community, culture, work group, sports team, or religious affiliation.

15. A non-transitory computer-readable medium having instructions stored thereon that, upon execution by a processing device, cause the processing device to: identify two or more individual regions in a digital image that each include a face; evaluate the two or more individual regions, wherein the evaluating the two or more individual regions comprises at least one of: assigning a fitness score to each of the two or more individual regions; or assigning a class to each of the two or more individual regions; pad each of the two or more individual regions to form individual padded regions; and define a combined padded region based on at least one of the fitness score of each of the two or more individual regions or the class of each of the two or more individual regions, wherein the combined padded region comprises the individual padded regions, wherein the evaluating the two or more individual regions comprises assigning a fitness score to each of the two or more individual regions based on size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, or contrast of the face in each of the two or more individual regions, and wherein the defining the combined padded region comprises defining a first combined padded region including only those of the two or more individual regions having a fitness score above a preselected threshold.

16. The non-transitory computer-readable medium of claim 15, wherein the defining the combined padded region comprises defining the combined padded region by combining only those of the two or more individual regions having fitness scores deviating from each other less than a predetermined amount.

17. The non-transitory computer-readable medium of claim 15, wherein the defining the combined padded region comprises defining the combined padded region by combining only those of the two or more individual regions having a common classification.

18. A device comprising: a processing system configured to: identify two or more individual regions in a digital image that each include a face; evaluate the two or more individual regions, wherein the evaluating the two or more individual regions comprises at least one of: assigning a fitness score to each of the two or more individual regions; or assigning a class to each of the two or more individual regions; pad each of the two or more individual regions to form individual padded regions; and define a combined padded region based on at least one of the fitness score of each of the two or more individual regions or the class of each of the two or more individual regions, wherein the combined padded region comprises the individual padded regions; and a display communicatively coupled to the processing system and configured to display the combined padded region, wherein the evaluation of two or more individual regions comprises assignment of a fitness score to each of the two or more individual regions based on size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, or contrast of the face in each of the two or more individual regions, and wherein the definition of the combined padded region comprises defining a first combined padded region including only those of the two or more individual regions having a fitness score above a preselected threshold.
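Claim 1's fitness-score gate — score each face region on quality cues and include only regions scoring above a preselected threshold in the first combined padded region — can be sketched as below. The cue names, weights, and dict-based region representation are illustrative assumptions; the claim lists the cues (size, blink status, gaze direction, expression, pose, occlusion, sharpness, exposure, contrast) but not how to weight them.

```python
# Illustrative sketch of claim 1's fitness-score filter. The scoring
# weights and region representation are assumptions, not from the patent;
# the claim names the cues but does not specify how they are combined.

def fitness_score(region: dict) -> float:
    """Average three of claim 1's quality cues into a 0..1 score.

    Cues used here: face size (capped at 100 px), blink status, and
    sharpness (assumed already normalized to 0..1).
    """
    size_cue = min(region.get("size_px", 0) / 100.0, 1.0)
    blink_cue = 0.0 if region.get("blinking", False) else 1.0
    sharpness_cue = region.get("sharpness", 0.0)
    return (size_cue + blink_cue + sharpness_cue) / 3.0

def first_combined_region(regions: list[dict], threshold: float) -> list[dict]:
    """Keep only regions whose fitness score exceeds the preselected threshold."""
    return [r for r in regions if fitness_score(r) > threshold]
```

Given a large, sharp, open-eyed face and a small, blurry, blinking one, a 0.5 threshold keeps only the first, matching the claim's "first combined padded region including only those ... having a fitness score above a preselected threshold."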