Systems and methods for persona identification using combined probability maps
IPC Classification
Country/Type: United States (US) patent, granted
IPC (7th edition): G06K-009/00; G06K-009/48; H04N-007/15; H04N-007/14; G06K-009/62; G06K-009/46; G06T-007/11; G06T-007/143; G06T-007/149; G06T-007/194
Application number: US-0231296 (2016-08-08)
Registration number: US-9740916 (2017-08-22)
Inventors: Lin, Dennis; Francisco, Glenn; Nguyen, Quang; Dang, Long
Applicant: Personify Inc.
Agent: Invention Mine LLC
Citation information: cited by 0 patents; cites 107 patents
Abstract
Disclosed herein are systems and methods for persona identification using combined probability maps. An embodiment takes the form of a method that includes obtaining at least one frame of pixel data; processing the at least one frame of pixel data to generate a hair-identification probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map.
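The abstract describes a pipeline that turns per-pixel foreground-probability maps into an extracted persona image. A minimal sketch of how such maps might be combined and thresholded, in pure Python; the function names, the weighted-average combination, and the 0.5 threshold are illustrative assumptions, not details taken from the patent:

```python
def combine_probability_maps(maps, weights=None):
    """Aggregate several per-pixel foreground-probability maps
    (each a list of rows of floats in [0, 1]) by weighted average."""
    if weights is None:
        weights = [1.0] * len(maps)
    total = sum(weights)
    rows, cols = len(maps[0]), len(maps[0][0])
    return [[sum(w * m[r][c] for w, m in zip(weights, maps)) / total
             for c in range(cols)] for r in range(rows)]


def extract_persona(frame, aggregate_map, threshold=0.5):
    """Keep pixels whose aggregate foreground probability exceeds the
    threshold; all other pixels become transparent (None here)."""
    return [[frame[r][c] if aggregate_map[r][c] > threshold else None
             for c in range(len(frame[0]))] for r in range(len(frame))]
```

For example, averaging a hair-identification map with a depth-based map and extracting pixels above 0.5 keeps only pixels that both maps jointly consider foreground.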
Representative claims
1. A method comprising: obtaining at least one frame of pixel data; processing the at least one frame of pixel data to generate a hair-identification probability map at least in part by: identifying a plurality of pixel columns that cross an identified head contour; and for each pixel column in the plurality of pixel columns: performing a color-based segmentation of the pixels in the pixel column into a foreground segment, a hair segment, and a background segment; and assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map; and generating a persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map.

2. The method of claim 1, further comprising converting the head contour into a multi-segment polygon that approximates the head contour, the multi-segment polygon being formed of multiple head-contour segments, wherein identifying the plurality of pixel columns that cross the identified head contour comprises identifying pixel columns that cross one of the head-contour segments.

3. The method of claim 1, wherein performing a color-based segmentation comprises performing a color-based segmentation using a clustering algorithm.

4. The method of claim 3, wherein the clustering algorithm is a k-means algorithm with k=3.

5. The method of claim 1, wherein performing the color-based segmentation of the pixels in a given pixel column into the foreground segment, the hair segment, and the background segment of the given pixel column comprises: identifying an average foreground-pixel color, an average hair-pixel color, and an average background-pixel color for the given pixel column; and identifying the foreground segment, the hair segment, and the background segment of the given pixel column using a clustering algorithm to cluster the pixels in the given pixel column around the identified average foreground-pixel color, the identified average hair-pixel color, and the identified average background-pixel color for the given pixel column, respectively.

6. The method of claim 5, wherein: identifying the average foreground-pixel color for the given pixel column comprises identifying the average foreground-pixel color for the given pixel column based on a first set of pixels at an innermost end of the given pixel column; identifying the average hair-pixel color for the given pixel column comprises identifying the average hair-pixel color for the given pixel column based on a second set of pixels that includes a point where the given pixel column crosses the identified head contour; and identifying the average background-pixel color for the given pixel column comprises identifying the average background-pixel color for the given pixel column based on a third set of pixels at an outermost end of the given pixel column.

7. The method of claim 1, further comprising, for each pixel column in the plurality of pixel columns: assigning the pixels in the foreground and background segments an equal probability of being in the foreground and being in the background in the hair-identification probability map.

8. The method of claim 1, wherein assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map comprises: assigning a first value to the pixels in the hair segment in the hair-identification probability map; and assigning a second value to the pixels in the foreground and background segments in the hair-identification probability map, wherein the first value corresponds to a higher probability of being a foreground pixel than does the second value.

9. The method of claim 1, further comprising processing the at least one frame of pixel data to generate at least one additional probability map, wherein generating the persona image by extracting pixels from the at least one frame of pixel data is further based on the generated at least one additional probability map.

10. The method of claim 9, wherein: obtaining the at least one frame of pixel data comprises obtaining the at least one frame of pixel data and corresponding image depth data; and processing the at least one frame of pixel data to generate the at least one additional probability map comprises processing the at least one frame of pixel data and the corresponding image depth data to generate at least one of the at least one additional probability maps.

11. The method of claim 9, further comprising combining the hair-identification probability map and the at least one additional probability map to obtain an aggregate persona probability map, wherein generating the persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map and at least in part on the generated at least one additional probability map comprises generating the persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map.

12. The method of claim 1, further comprising processing the at least one frame of pixel data to generate at least one additional probability map, wherein generating the persona image by extracting pixels from the at least one frame of pixel data is further based at least in part on the generated at least one additional probability map.

13. The method of claim 12, wherein: obtaining the at least one frame of pixel data comprises obtaining the at least one frame of pixel data and corresponding image depth data; and processing the at least one frame of pixel data to generate the at least one additional probability map comprises processing the at least one frame of pixel data and the corresponding image depth data to generate at least one of the at least one additional probability maps.

14. The method of claim 12, further comprising combining the hair-identification probability map and the at least one additional probability map to obtain an aggregate persona probability map, wherein generating the persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map and at least in part on the generated at least one additional probability map comprises generating the persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map.

15. An apparatus comprising: a hair-identification module that is configured to generate a hair-identification probability map based on at least one frame of pixel data at least in part by: identifying a plurality of pixel columns that cross an identified head contour; and for each pixel column in the plurality of pixel columns: performing a color-based segmentation of the pixels in the pixel column into a foreground segment, a hair segment, and a background segment; and assigning the pixels in the hair segment an increased foreground-probability value in the hair-identification probability map; and a persona extraction module configured to generate a persona image by extracting pixels from at least one frame of pixel data based at least in part on the generated hair-identification probability map.

16. The apparatus of claim 15, further comprising a foreground-background module configured to generate a foreground-background map based on image depth data corresponding to the at least one frame of pixel data, wherein the persona extraction module is configured to generate the persona image by extracting pixels from the at least one frame of pixel data based also on the generated foreground-background map.

17. The apparatus of claim 15, further comprising: a plurality of additional persona identification modules configured to generate a corresponding plurality of additional persona probability maps based on the at least one frame of pixel data; and a combiner module configured to generate an aggregate persona probability map based on the hair-identification probability map and the plurality of additional persona probability maps, wherein the persona extraction module being configured to generate the persona image by extracting pixels from the at least one frame of pixel data based at least in part on the generated hair-identification probability map comprises the persona extraction module being configured to generate the persona image by extracting pixels from the at least one frame of pixel data based on the aggregate persona probability map.
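Claims 4-7 describe segmenting each pixel column crossing the head contour with k-means (k=3), seeded by average colors taken from the innermost pixels (foreground), the pixels around the contour crossing (hair), and the outermost pixels (background), then boosting the hair segment's foreground probability. A minimal sketch of that per-column step in pure Python; scalar intensities stand in for colors, and the seed width, iteration count, and the 0.9/0.5 probability values are illustrative assumptions, not values from the patent:

```python
def seeded_kmeans_segments(column, contour_idx, seed_width=3, iters=10):
    """Segment a 1-D pixel column (innermost pixel first) into
    foreground / hair / background clusters via k-means with k=3,
    seeded as in claims 5-6: the foreground mean from the innermost
    pixels, the hair mean from pixels around the head-contour
    crossing, the background mean from the outermost pixels.
    Pixels here are scalar intensities; a real implementation
    would cluster RGB vectors by Euclidean distance."""
    def mean(vals):
        return sum(vals) / len(vals)

    means = [
        mean(column[:seed_width]),                              # foreground seed
        mean(column[max(0, contour_idx - 1):contour_idx + 2]),  # hair seed
        mean(column[-seed_width:]),                             # background seed
    ]
    labels = [0] * len(column)
    for _ in range(iters):
        # Assign each pixel to the nearest cluster mean.
        labels = [min(range(3), key=lambda k: abs(p - means[k]))
                  for p in column]
        # Recompute each cluster mean from its members.
        for k in range(3):
            members = [p for p, lab in zip(column, labels) if lab == k]
            if members:
                means[k] = mean(members)
    return labels  # 0 = foreground, 1 = hair, 2 = background


def hair_probability(labels, hair_value=0.9, other_value=0.5):
    """Per claims 7-8: hair-segment pixels get an increased
    foreground-probability value; foreground and background pixels
    get an equal, neutral value."""
    return [hair_value if lab == 1 else other_value for lab in labels]
```

Seeding the three cluster means from spatially distinct regions of the column is what lets plain k-means recover the foreground / hair / background ordering without any labeled training data.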
Patents cited by this patent (107)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Panahpour Tehrani, Mehrdad; Ishikawa, Akio; Sakazawa, Shigeyuki, Apparatus, method and computer program for classifying pixels in a motion picture as foreground or background.
Clanton,Charles H.; Ventrella,Jeffrey J.; Paiz,Fernando J., Cinematic techniques in avatar-centric communication during a multi-user online simulation.
DeMenthon Daniel F. (Columbia MD), Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitor.
Tian, Dihong; Mauchly, J. William; Friel, Joseph T., Generating and rendering synthesized views with multiple video streams in telepresence video conference sessions.
Iwamoto, Masayuki; Fujimura, Koichi, Image processing apparatus, method for processing an image and computer-readable recording medium for causing a computer to process images.
Carter, James; Yaacob, Arik; Darrah, James F., Managing the layout of multiple video streams displayed on a destination display screen during a videoconference.
Bang, Gun; Um, Gi-Mun; Chang, Eun-Young; Kim, Taeone; Hur, Nam-Ho; Kim, Jin-Woong; Lee, Soo-In, Method and apparatus for improving quality of depth image.
Haskell, Barin Geoffry; Puri, Atul; Schmidt, Robert Lewis, Scene description nodes to support improved chroma-key shape representation of coded arbitrary images and video objects.
Mackie, David J.; Tian, Dihong; Weir, Andrew P.; Buttimer, Maurice; Friel, Joseph T.; Mauchly, J. William; Chen, Wen-Hsiung, System and method for providing enhanced video processing in a network environment.
Prahlad, Anand; Schwartz, Jeremy A.; Ngo, David; Brockway, Brian; Muller, Marcus S., Systems and methods for classifying and transferring information in a storage network.
Weiser, Reginald; McGravie, Richard; Diouskine, Roman; Teboul, Jeremy, Systems and methods for providing video conferencing services via an ethernet adapter.
Rudolph, Eric; Rui, Yong; Malvar, Henrique S; He, Li Wei; Cohen, Michael F; Tashev, Ivan, Systems and methods for real-time audio-visual communication and data collaboration in a network conference environment.