Methods and systems for combining foreground video and background video using chromatic matching
IPC Classification
Country/Type: United States (US) Patent, Granted
IPC (7th ed.): H04N-009/74; H04N-009/73; H04N-005/272; H04N-009/12
Application No.: US-0181653 (2016-06-14)
Registration No.: US-9883155 (2018-01-30)
Inventors: Patel, Sanjay; Yarkony, Elad
Applicant: PERSONIFY, INC.
Agent: Invention Mine LLC
Citations: cited by 0 patents; cites 116 patents
Abstract
Disclosed herein are methods and systems for combining foreground video and background video using chromatic matching. In an embodiment, a system obtains foreground video data. The system obtains background video data. The system determines a color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic. The system selects a chromatic-adjustment technique from a set of chromatic-adjustment techniques based on the determined color-distribution dimensionality of the background video data. The system adjusts the foreground video data using the selected chromatic-adjustment technique. The system generates combined video data at least in part by combining the background video data with the adjusted foreground video data. The system outputs the combined video for display.
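The pipeline in the abstract can be sketched end to end. The following is a toy illustration only, not Personify's implementation: `chroma_spread`, the threshold, and both adjusters are simplified stand-ins invented here (the patent's actual tests operate on {a,b} variance in an {L,a,b} color space, as detailed in the claims below).

```python
# Toy sketch of the abstract's pipeline. All names, thresholds, and the
# simplified color tests are illustrative assumptions, not the patent's.

def chroma_spread(pixels):
    # Stand-in for the patent's {a,b}-variance test: variance of the
    # per-pixel channel spread (max - min) across the frame.
    spreads = [max(p) - min(p) for p in pixels]
    mean = sum(spreads) / len(spreads)
    return sum((s - mean) ** 2 for s in spreads) / len(spreads)

def classify_dimensionality(pixels, threshold=50.0):
    # High-dimensional chromatic if chroma varies a lot, else low.
    return "high" if chroma_spread(pixels) > threshold else "low"

def white_balance(fg, bg):
    # Stand-in: scale each foreground channel by the ratio of the
    # background's channel mean to the foreground's.
    fm = [sum(p[i] for p in fg) / len(fg) for i in range(3)]
    bm = [sum(p[i] for p in bg) / len(bg) for i in range(3)]
    gains = [bm[i] / fm[i] if fm[i] else 1.0 for i in range(3)]
    return [tuple(min(255, round(p[i] * gains[i])) for i in range(3))
            for p in fg]

def chroma_replace(fg, bg):
    # Stand-in: pull each foreground pixel halfway toward the
    # background's mean color.
    bm = [sum(p[i] for p in bg) / len(bg) for i in range(3)]
    return [tuple(round((p[i] + bm[i]) / 2) for i in range(3)) for p in fg]

def combine(fg, bg):
    # Select the adjuster from the background's dimensionality, adjust
    # the foreground, then composite (here: simple concatenation).
    adjust = white_balance if classify_dimensionality(bg) == "high" else chroma_replace
    return bg + adjust(fg, bg)
```

A mostly gray background classifies as low-dimensional and routes through `chroma_replace`; a background mixing saturated and neutral pixels classifies as high-dimensional.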
Representative Claims
1. A method comprising: obtaining foreground video data; obtaining background video data; determining a color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic; selecting a chromatic-adjustment technique from a set of chromatic-adjustment techniques based on the determined color-distribution dimensionality of the background video data; adjusting the foreground video data using the selected chromatic-adjustment technique; generating combined video data at least in part by combining the background video data with the adjusted foreground video data; and outputting the combined video data for display.

2. The method of claim 1, wherein determining the color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic comprises: converting pixels of the background video data from an {R,G,B} color space to an {L,a,b} color space; calculating an {a,b} variance of the converted background pixels; comparing the calculated {a,b} variance to an {a,b}-variance threshold; determining the color-distribution dimensionality of the background video data to be high-dimensional chromatic if the calculated {a,b} variance exceeds the {a,b}-variance threshold; and determining the color-distribution dimensionality of the background video data to be low-dimensional chromatic if the calculated {a,b} variance does not exceed the {a,b}-variance threshold.

3. The method of claim 2, wherein: calculating the {a,b} variance of the converted background pixels comprises determining how many luminance levels in the converted background pixels have more than a luminance-level-specific degree of {a,b} variance; and the {a,b}-variance threshold is a threshold number of luminance levels.

4. The method of claim 3, wherein the threshold number of luminance levels is zero.

5. The method of claim 3, wherein the threshold number of luminance levels is greater than zero.

6. The method of claim 2, wherein calculating the {a,b} variance of the converted background pixels comprises: determining a respective luminance-level-specific {a,b} variance for each of a plurality of luminance levels that are represented in the converted background pixels; and calculating the {a,b} variance of the converted background pixels to be a sum of the determined luminance-level-specific {a,b} variances.

7. The method of claim 2, wherein calculating the {a,b} variance of the converted background pixels comprises: determining a respective luminance-level-specific {a,b} variance for each luminance level represented in the converted background pixels; and calculating the {a,b} variance of the converted background pixels to be a sum of the determined luminance-level-specific {a,b} variances.

8. The method of claim 1, wherein determining the color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic comprises: determining the color-distribution dimensionality of the background video data to be low-dimensional chromatic if a background-color distribution of the background video data in an {L,a,b} color space is supported by a relationship defined by: {(L,a,b) | a = fa(L), b = fb(L)}, where fa and fb are functions; and otherwise determining the color-distribution dimensionality of the background video data to be high-dimensional chromatic.

9. The method of claim 1, wherein: the set of chromatic-adjustment techniques includes a white-balancing technique and a chromatic-replacement technique; and selecting a chromatic-adjustment technique based on the determined color-distribution dimensionality comprises: selecting the white-balancing technique when the color-distribution dimensionality of the background video data is determined to be high-dimensional chromatic; and selecting the chromatic-replacement technique when the color-distribution dimensionality of the background video data is determined to be low-dimensional chromatic.

10. The method of claim 9, wherein adjusting the foreground video data using the white-balancing technique comprises: determining a foreground average of pixels of the foreground video data in an {R,G,B} color space; determining a background average of pixels of the background video data in the {R,G,B} color space; converting the foreground average and the background average from the {R,G,B} color space to a second color space; determining a transform matrix in the second color space from the converted foreground average to the converted background average; converting the pixels of the foreground video data from the {R,G,B} color space to the second color space; transforming the converted foreground pixels in the second color space using the determined transform matrix; and converting the transformed foreground pixels from the second color space to the {R,G,B} color space.

11. The method of claim 10, wherein the determined transform matrix comprises dimension-wise ratios in the second color space of the converted background average to the converted foreground average.

12. The method of claim 10, wherein the second color space is an {L,a,b} color space.

13. The method of claim 10, wherein the second color space is an {X,Y,Z} color space.

14. The method of claim 13, wherein converting the foreground pixels from the {R,G,B} color space to the second color space comprises converting the foreground pixels from the {R,G,B} color space to an {L,a,b} color space and then from the {L,a,b} color space to the {X,Y,Z} color space.

15. The method of claim 9, further comprising converting pixels of the background video data from an {R,G,B} color space to an {L,a,b} color space, wherein adjusting the foreground video data using the chromatic-replacement technique comprises: generating an L-to-{a,b} lookup table based on the converted background pixels; converting pixels of the foreground video data from the {R,G,B} color space to the {L,a,b} color space; transforming the converted foreground pixels at least in part by: using the respective L values of the respective converted foreground pixels to select respective replacement {a,b} values for the respective converted foreground pixels based on the L-to-{a,b} lookup table; and replacing the respective {a,b} values of the respective converted foreground pixels with the corresponding respective selected replacement {a,b} values; and converting the transformed foreground pixels from the {L,a,b} color space to the {R,G,B} color space.

16. The method of claim 15, wherein using the respective L values of the respective converted foreground pixels to select the respective replacement {a,b} values for the respective converted foreground pixels based on the L-to-{a,b} lookup table comprises: retrieving the respective replacement {a,b} values from the L-to-{a,b} lookup table in cases where the respective L value of the respective converted foreground pixel is listed in the L-to-{a,b} lookup table.

17. The method of claim 16, wherein using the respective L values of the respective converted foreground pixels to select the respective replacement {a,b} values for the respective converted foreground pixels based on the L-to-{a,b} lookup table further comprises: using interpolated {a,b} values based on one or more entries in the L-to-{a,b} lookup table as the respective replacement {a,b} values in cases where the respective L value of the respective converted foreground pixel is not listed in the L-to-{a,b} lookup table.

18. The method of claim 17, wherein the interpolated {a,b} values are copied from a nearest L value that is listed in the L-to-{a,b} lookup table.

19. The method of claim 17, wherein the interpolated {a,b} values are average {a,b} values of two or more proximate entries in the L-to-{a,b} lookup table.

20. The method of claim 1, further comprising: obtaining second foreground video data; and adjusting the second foreground video data using the selected chromatic-adjustment technique, wherein generating the combined video data comprises combining the background video data with both the adjusted foreground video data and the adjusted second foreground video data.

21. A system comprising: a communication interface; a processor; and a non-transitory computer-readable medium storing instructions executable by the processor for causing the system to perform functions including: obtaining foreground video data; obtaining background video data; determining a color-distribution dimensionality of the background video data to be either high-dimensional chromatic or low-dimensional chromatic; selecting a chromatic-adjustment technique from a set of chromatic-adjustment techniques based on the determined color-distribution dimensionality of the background video data; adjusting the foreground video data using the selected chromatic-adjustment technique; generating combined video data at least in part by combining the background video data with the adjusted foreground video data; and outputting the combined video data for display.
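The variance test of claims 2-5 could be sketched as follows, assuming sRGB input and a D65 white point. The helper names, the rounding of L to integer luminance levels, and the per-level variance limit of 4.0 are assumptions for illustration; the claims leave these unspecified.

```python
from collections import defaultdict

def srgb_to_linear(c):
    # Standard sRGB inverse gamma, input in 0..255.
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r, g, b):
    # sRGB -> XYZ (D65) -> CIE L*a*b*, using the standard constants.
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def color_dimensionality(pixels, per_level_limit=4.0, level_threshold=0):
    # Claims 2-5: count luminance levels whose {a,b} variance exceeds a
    # luminance-level-specific limit, then compare that count to a
    # threshold number of levels (here zero, per claim 4).
    buckets = defaultdict(list)
    for r, g, b in pixels:
        L, a, bb = rgb_to_lab(r, g, b)
        buckets[round(L)].append((a, bb))
    noisy = 0
    for ab in buckets.values():
        ma = sum(a for a, _ in ab) / len(ab)
        mb = sum(b for _, b in ab) / len(ab)
        var = sum((a - ma) ** 2 + (b - mb) ** 2 for a, b in ab) / len(ab)
        if var > per_level_limit:
            noisy += 1
    return "high" if noisy > level_threshold else "low"
```

A purely gray frame has near-zero {a,b} spread at every luminance level and classifies as low-dimensional; a frame where one luminance level contains both a saturated and a neutral pixel classifies as high-dimensional.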
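Claims 10-13 describe the white-balancing path. A sketch using {X,Y,Z} as the second color space (claim 13) and the dimension-wise gain ratios of claim 11 might look like this; the matrix constants are standard sRGB/D65 values, and all function names are assumptions.

```python
def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    c = 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055
    return min(255, max(0, round(c * 255)))

def rgb_to_xyz(r, g, b):
    rl, gl, bl = (srgb_to_linear(v) for v in (r, g, b))
    return (0.4124 * rl + 0.3576 * gl + 0.1805 * bl,
            0.2126 * rl + 0.7152 * gl + 0.0722 * bl,
            0.0193 * rl + 0.1192 * gl + 0.9505 * bl)

def xyz_to_rgb(x, y, z):
    return (linear_to_srgb( 3.2406 * x - 1.5372 * y - 0.4986 * z),
            linear_to_srgb(-0.9689 * x + 1.8758 * y + 0.0415 * z),
            linear_to_srgb( 0.0557 * x - 0.2040 * y + 1.0570 * z))

def white_balance_xyz(fg, bg):
    # Claim 10: average both frames in RGB, convert the averages to the
    # second color space, form dimension-wise gains (claim 11), apply
    # them to every foreground pixel in XYZ, and convert back to RGB.
    favg = [sum(p[i] for p in fg) / len(fg) for i in range(3)]
    bavg = [sum(p[i] for p in bg) / len(bg) for i in range(3)]
    fxyz, bxyz = rgb_to_xyz(*favg), rgb_to_xyz(*bavg)
    gains = [bv / fv if fv else 1.0 for fv, bv in zip(fxyz, bxyz)]
    out = []
    for p in fg:
        x, y, z = rgb_to_xyz(*p)
        out.append(xyz_to_rgb(x * gains[0], y * gains[1], z * gains[2]))
    return out
```

When foreground and background averages already match, the gains are unity and pixels pass through essentially unchanged; a dark-gray foreground against a brighter gray background is pushed toward the background's brightness.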
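The chromatic-replacement path of claims 15-19 can be sketched directly in {L,a,b}; the RGB-to-Lab and Lab-to-RGB conversions that the claims also recite are elided here for brevity. The per-level mean table and the nearest-L fallback correspond to claims 15 and 18; rounding L to integer table keys and all names are assumptions.

```python
from collections import defaultdict

def chroma_replace_lab(fg_lab, bg_lab):
    # Claim 15: build an L -> {a,b} lookup table from the background
    # pixels (here: the mean {a,b} per rounded L level), then overwrite
    # each foreground pixel's {a,b} with the table entry for its L.
    table = defaultdict(list)
    for L, a, b in bg_lab:
        table[round(L)].append((a, b))
    lut = {L: (sum(a for a, _ in v) / len(v), sum(b for _, b in v) / len(v))
           for L, v in table.items()}
    keys = sorted(lut)
    out = []
    for L, a, b in fg_lab:
        k = round(L)
        if k not in lut:
            # Claim 18: when L is not listed, copy {a,b} from the
            # nearest listed L value.
            k = min(keys, key=lambda q: abs(q - L))
        na, nb = lut[k]
        out.append((L, na, nb))  # keep luminance, replace chroma
    return out
```

Luminance is preserved while chroma is forced onto the background's one-dimensional color curve, which is why this technique suits low-dimensional-chromatic backgrounds.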
Patents cited by this patent (116)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Panahpour Tehrani, Mehrdad; Ishikawa, Akio; Sakazawa, Shigeyuki, Apparatus, method and computer program for classifying pixels in a motion picture as foreground or background.
Clanton,Charles H.; Ventrella,Jeffrey J.; Paiz,Fernando J., Cinematic techniques in avatar-centric communication during a multi-user online simulation.
DeMenthon Daniel F. (Columbia MD), Computer vision system for position monitoring in three dimensions using non-coplanar light sources attached to a monitor.
Tian, Dihong; Mauchly, J. William; Friel, Joseph T., Generating and rendering synthesized views with multiple video streams in telepresence video conference sessions.
Iwamoto, Masayuki; Fujimura, Koichi, Image processing apparatus, method for processing and image and computer-readable recording medium for causing a computer to process images.
Carter, James; Yaacob, Arik; Darrah, James F., Managing the layout of multiple video streams displayed on a destination display screen during a videoconference.
Bang, Gun; Um, Gi-Mun; Chang, Eun-Young; Kim, Taeone; Hur, Nam-Ho; Kim, Jin-Woong; Lee, Soo-In, Method and apparatus for improving quality of depth image.
Yeh Hwa-Young M ; Lure Yuan-Ming F ; Lin Jyh-Shyan, Method and system for re-screening nodules in radiological images using multi-resolution processing, neural network, and image processing.
Berman Arie ; Vlahos Paul ; Dadourian Arpag, Method for removing from an image the background surrounding a selected subject by generating candidate mattes.
Colmenarez, Antonio J.; Gutta, Srinivas, Person tagging in an image processing system utilizing a statistical model based on both appearance and geometric features.
Haskell, Barin Geoffry; Puri, Atul; Schmidt, Robert Lewis, Scene description nodes to support improved chroma-key shape representation of coded arbitrary images and video objects.
Mackie, David J.; Tian, Dihong; Weir, Andrew P.; Buttimer, Maurice; Friel, Joseph T.; Mauchly, J. William; Chen, Wen-Hsiung, System and method for providing enhanced video processing in a network environment.
Prahlad, Anand; Schwartz, Jeremy A.; Ngo, David; Brockway, Brian; Muller, Marcus S., Systems and methods for classifying and transferring information in a storage network.
Weiser, Reginald; McGravie, Richard; Diouskine, Roman; Teboul, Jeremy, Systems and methods for providing video conferencing services via an ethernet adapter.
Rudolph, Eric; Rui, Yong; Malvar, Henrique S; He, Li Wei; Cohen, Michael F; Tashev, Ivan, Systems and methods for real-time audio-visual communication and data collaboration in a network conference environment.