IPC Classification Information

Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) |
Application No. | US-0621456 (2003-07-18)
Registration No. | US-7352880 (2008-04-01)
Priority | KR-10-2002-0042485 (2002-07-19)
Inventor / Address |
Applicant / Address | Samsung Electronics Co., Ltd.
Agent / Address | Buchanan Ingersoll & Rooney PC
Citations | Times cited: 8 | Cited patents: 8
Abstract
A face detection and tracking system for detecting and tracking a plurality of faces in real time from an input image comprises: a background removing unit which extracts an area having a motion by removing a background image from the input image; a candidate area extracting unit which extracts a candidate area in which a face can be located in the area having a motion by using a skin color probability map generated from a face skin color model and a global probability map; a face area determination unit which extracts independent component analysis (ICA) features from the candidate area and determines whether the candidate area is a face area by using a trained SVM classifier; and a face area tracking unit which tracks the face area according to a directional kernel indicating a probability that a face is located in a next frame based on the skin color probability map.
Representative Claims
What is claimed is:

1. A face detection and tracking system for detecting and tracking a plurality of faces in real time from an input image, the system comprising: a background removing unit which extracts an area having a motion by removing a background image from the input image; a candidate area extracting unit which extracts a candidate area in which a face is possibly located in the area having a motion by using a skin color probability map (Pskin) generated from a face skin color model and a global probability map (Pglobal), wherein the candidate area extracting unit comprises: a skin color probability map generation unit which generates the skin color probability map (Pskin) of the area having a motion by using the face skin color model, a global probability map generation unit which extracts a plurality of highest points of the area having a motion, sets central coordinates at a predetermined distance from the plurality of highest points, and calculates a probability that a face is located within a distance from the central coordinates to generate the global probability map (Pglobal), and a multiple scale probability map generation unit which generates a multiple scale probability map about the probability that a face is located, by multiplying the skin color probability map (Pskin) and the global probability map (Pglobal), and extracts an area, in which the probability value of the generated multiple scale probability map is equal to or greater than a predetermined threshold value, as the candidate area where a face is possibly located; a face area determination unit which extracts independent component analysis (ICA) features from the candidate area and determines whether the candidate area is a face area; and a face area tracking unit which tracks the face area according to a directional kernel indicating a probability that a face is located in a next frame based on the skin color probability map (Pskin).

2. The system of claim 1, wherein the skin color probability map generation unit converts a color of each pixel in the area having a motion into hue and saturation values, and applies the values to the face skin color model, which is a 2-dimensional Gaussian model that is trained in advance with a plurality of skin colors, to generate the skin color probability map (Pskin) indicating a probability that the color of the area having a motion is one of the plurality of skin colors.

3. The system of claim 2, wherein when Hue(i,j) and Sat(i,j) denote the hue and saturation values at coordinates (i,j) of the area having a motion, respectively, u⃗ and Σ denote the average and distribution of Gaussian dispersion, respectively, and the size of a face desired to be searched for is n, the skin color probability map Pskin(x,y,n) is generated according to the following equation:

4. The system of claim 1, wherein when u⃗i denotes the central coordinates of the candidate area, Σ denotes a dispersion matrix, n denotes the size of a face area, and (xi,yi) denotes the coordinates of each local area (i), the global probability map generation unit generates the global probability map Pglobal(x,y,n) according to the following equation: where u⃗i, Σ, xi, and yi satisfy the following equations, respectively:

5. The system of claim 1, wherein the face area determination unit comprises: an ICA feature extracting unit which extracts features by performing ICA on the extracted face candidate area; and a face determination unit which determines whether the candidate area is a face by providing the ICA features of the candidate area to a support vector machine (SVM) which has learned features obtained by performing ICA on learning face images and features obtained by performing ICA on images that are not face images.

6.
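The candidate-area step in claims 1 through 3 can be sketched in code: each pixel's (hue, saturation) pair is scored under a 2-D Gaussian skin model, the result is multiplied by a global probability map, and the product is thresholded. This is a minimal illustration, not the patented implementation; the mean, covariance, and threshold values below are made up for the example.

```python
import numpy as np

def skin_color_probability(hue, sat, mean, cov):
    """Evaluate an unnormalized 2-D Gaussian skin model at each (hue, sat) pixel."""
    inv = np.linalg.inv(cov)
    d = np.stack([hue - mean[0], sat - mean[1]], axis=-1)   # deviation, shape (..., 2)
    # Mahalanobis distance per pixel: d^T * inv(cov) * d
    m = np.einsum('...i,ij,...j->...', d, inv, d)
    return np.exp(-0.5 * m)

def candidate_mask(p_skin, p_global, threshold=0.5):
    """Multiply the two probability maps and keep pixels at or above the threshold."""
    return (p_skin * p_global) >= threshold

# Toy image whose pixels all match the (assumed) skin model mean exactly.
hue = np.full((4, 4), 0.05)
sat = np.full((4, 4), 0.30)
mean = np.array([0.05, 0.30])                 # assumed skin model mean (hue, sat)
cov = np.array([[0.01, 0.0], [0.0, 0.02]])    # assumed skin model covariance
p_skin = skin_color_probability(hue, sat, mean, cov)
mask = candidate_mask(p_skin, np.ones((4, 4)), threshold=0.5)
```

Since every toy pixel sits at the model mean, the Gaussian evaluates to 1 everywhere and the whole map survives the threshold.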
A face detection and tracking system for detecting and tracking a plurality of faces in real time from an input image, the system comprising: a background removing unit which extracts an area having a motion by removing a background image from the input image; a candidate area extracting unit which extracts a candidate area in which a face is possibly located in the area having a motion by using a skin color probability map (Pskin) generated from a face skin color model and a global probability map (Pglobal); a face area determination unit which extracts independent component analysis (ICA) features from the candidate area and determines whether the candidate area is a face area; and a face area tracking unit which tracks the face area according to a directional kernel indicating a probability that a face is located in a next frame based on the skin color probability map (Pskin), wherein, when it is assumed that the coordinates at which the center of a face is to be located and the dispersion are (μx, μy) and (σx, σy), respectively, the directional kernel indicating the probability that a face is located is expressed as f(x, y, σx, σy) in a direction in which the face area moves, and as f(x, y, σx, σy) in the opposite direction.

7. The system of claim 1, wherein the background removing unit obtains a first area which is not a background by using a brightness difference of the input image and a background image stored in advance, and obtains a second area which is not the background by using a color difference of the input image and the background image stored in advance, and among a plurality of sub-areas included in the second area that is not the background, extracts each sub-area which includes the center of a sub-area included in the first area that is not the background as an area that is not the background, to remove the background image from the input image and extract an area having a motion.

8.
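The directional kernel of claim 6 can be illustrated as a 2-D Gaussian centered on the predicted face position. This record writes f(x, y, σx, σy) for both directions, so the exact asymmetry is not recoverable from the text; the `stretch` factor below, which widens the kernel along the assumed motion direction, is a hypothetical parameter added only for illustration.

```python
import numpy as np

def directional_kernel(x, y, mu_x, mu_y, sigma_x, sigma_y, stretch=2.0):
    """Gaussian kernel, wider (stretch * sigma_y) ahead of the center.

    Motion is assumed to be in the +y direction; the asymmetry is a
    made-up stand-in for the patent's unspecified forward/backward split.
    """
    sy = np.where(y >= mu_y, stretch * sigma_y, sigma_y)
    return np.exp(-0.5 * (((x - mu_x) / sigma_x) ** 2 + ((y - mu_y) / sy) ** 2))

ys, xs = np.mgrid[0:7, 0:7].astype(float)       # pixel grid, indexed [y, x]
k = directional_kernel(xs, ys, mu_x=3.0, mu_y=3.0, sigma_x=1.0, sigma_y=1.0)
```

The kernel peaks at the predicted center and decays more slowly in the direction of motion than in the opposite direction, which is the behavior the claim describes for biasing the next-frame search.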
The system of claim 7, wherein the background removing unit updates a new background image R′(x,y) according to the following equation: R′(x,y) = βR(x,y) + (1−β)B(x,y), where R(x,y) denotes an existing background image, B(x,y) denotes a binary image in which an area having a motion is removed from the input image, and β denotes an update constant.

9. A face detection and tracking system for detecting and tracking a plurality of faces in real time from an input image, the system comprising: a background removing unit which obtains a first area which is not a background by using a brightness difference of the input image and a background image stored in advance, and obtains a second area which is not the background by using a color difference of the input image and the background image stored in advance, and among a plurality of sub-areas included in the second area that is not the background, extracts each sub-area which includes the center of a sub-area included in the first area that is not the background, as areas that are not background, to remove the background image from the input image and extract an area having a motion; a skin color probability map generation unit which generates a skin color probability map (Pskin) of the area having a motion, by using a face skin color model; a global probability map generation unit which extracts a plurality of highest points of the area having a motion, sets central coordinates at a predetermined distance from the plurality of highest points, and calculates a probability that a face is located within a predetermined distance from the central coordinates, to generate a global probability map (Pglobal); a multiple scale probability map generation unit which generates a multiple scale probability map about the probability that a face is located by multiplying the skin color probability map (Pskin) and the global probability map (Pglobal), and extracts an area, in which the probability value of the generated multiple scale probability map is equal to or greater than a predetermined threshold value, as a candidate area where a face is possibly located; a face area determination unit which extracts independent component analysis (ICA) features from the candidate area and determines whether the candidate area is a face area by providing the ICA features of the candidate area to a support vector machine (SVM) which has learned features obtained by performing ICA on learning face images and features obtained by performing ICA on images that are not face images; and a face area tracking unit which tracks a face area according to a directional kernel indicating a probability that a face is located in a next frame, based on the skin color probability map (Pskin).

10. A face detection and tracking method for detecting and tracking a plurality of faces in real time from an input image, the method comprising: (a) extracting an area having a motion by removing a background image from the input image; (b) extracting a candidate area in which a face is possibly located in the area having a motion by using a skin color probability map (Pskin) generated from a face skin color model and a global probability map (Pglobal), wherein step (b) comprises: (b1) generating the skin color probability map (Pskin) of the area having a motion by using the face skin color model, (b2) extracting a plurality of highest points of the area having a motion, setting central coordinates at a predetermined distance from the plurality of highest points, and calculating a probability that a face is located within a predetermined distance from the central coordinates, to generate the global probability map (Pglobal), and (b3) generating a multiple scale probability map about the probability that a face is located by multiplying the skin color probability map (Pskin) and the global probability map (Pglobal), and extracting an area, in which the probability value of the generated multiple scale probability map is equal to or greater than a predetermined threshold value, as the candidate area where a face is possibly located; (c) extracting independent component analysis (ICA) features from a candidate area and determining whether the candidate area is a face area; and (d) tracking the face area according to a directional kernel indicating a probability that a face is located in a next frame, based on the skin color probability map (Pskin).

11. The method of claim 10, wherein in step (b1), the color of each pixel in the area having a motion is converted into hue and saturation values and the values are applied to the face skin color model, which is a 2-dimensional Gaussian model that is trained in advance with a plurality of skin colors, to generate the skin color probability map (Pskin) indicating a probability that the color of an area having a motion is one of the plurality of skin colors.

12. The method of claim 11, wherein when Hue(i,j) and Sat(i,j) denote the hue and saturation values at coordinates (i,j) of the area having a motion, respectively, u⃗ and Σ denote the average and distribution of Gaussian dispersion, respectively, and the size of a face desired to be searched for is n, the skin color probability map Pskin(x,y,n) is generated according to the following equation:

13. The method of claim 10, wherein in step (b2), when u⃗i denotes the central coordinates of the candidate area, Σ denotes a dispersion matrix, n denotes the size of a face area, and (xi,yi) denotes the coordinates of each local area (i), the global probability map Pglobal(x,y,n) is generated according to the following equation: where u⃗i, Σ, xi, and yi satisfy the following equations, respectively:

14.
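The background-model update recited in claims 8 and 17, R′(x,y) = βR(x,y) + (1−β)B(x,y), is a simple per-pixel blend of the stored background with the latest motion-free image. The sketch below uses toy arrays and an arbitrary β chosen for the example.

```python
import numpy as np

def update_background(R, B, beta=0.9):
    """Blend the existing background R toward the new motion-free image B.

    Implements R'(x, y) = beta * R(x, y) + (1 - beta) * B(x, y),
    where beta is the update constant from the claim.
    """
    return beta * R + (1.0 - beta) * B

R = np.zeros((2, 2))        # existing background image
B = np.ones((2, 2))         # binary image with the moving area removed
R_new = update_background(R, B, beta=0.9)   # every pixel becomes 0.1
```

A β close to 1 makes the background adapt slowly, which keeps briefly stationary faces from being absorbed into the background model.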
The method of claim 10, wherein step (c) comprises: extracting features by performing ICA on the extracted face candidate area; and determining whether the candidate area is a face by providing the ICA features of the candidate area to a support vector machine (SVM) which has learned features obtained by performing ICA on learning face images and features obtained by performing ICA on images that are not face images.

15. A face detection and tracking method for detecting and tracking a plurality of faces in real time from an input image, the method comprising: (a) extracting an area having a motion by removing a background image from the input image; (b) extracting a candidate area in which a face is possibly located in the area having a motion by using a skin color probability map (Pskin) generated from a face skin color model and a global probability map (Pglobal); (c) extracting independent component analysis (ICA) features from a candidate area and determining whether the candidate area is a face area; and (d) tracking the face area according to a directional kernel indicating a probability that a face is located in a next frame, based on the skin color probability map (Pskin), wherein, when it is assumed that the coordinates at which the center of a face is to be located and the dispersion are (μx, μy) and (σx, σy), respectively, the directional kernel indicating the probability that a face is located is expressed as f(x, y, σx, σy) in a direction in which the face area moves, and as f(x, y, σx, σy) in the opposite direction.

16.
The method of claim 10, wherein in step (a), a first area which is not a background is obtained by using a brightness difference of the input image and a background image stored in advance, and a second area which is not the background is obtained by using a color difference of the input image and the background image stored in advance, and among a plurality of sub-areas included in the second area that is not the background, each sub-area which includes the center of a sub-area included in the first area that is not the background is extracted as an area that is not the background, so that the background image is removed from the input image and an area having a motion is extracted.

17. The method of claim 16, wherein in step (a), a new background image R′(x,y) is updated according to the following equation: R′(x,y) = βR(x,y) + (1−β)B(x,y), where R(x,y) denotes an existing background image, B(x,y) denotes a binary image in which an area having a motion is removed from the input image, and β denotes an update constant.

18.
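The two-cue background removal of claim 16 can be roughed out as follows: one foreground estimate comes from the brightness difference against the stored background, another from the color difference, and the two are combined. This is a deliberately simplified sketch: the thresholds are made-up values, and the claim's sub-area/center corroboration step is reduced here to a per-pixel logical AND for illustration.

```python
import numpy as np

def foreground_mask(frame, background, bright_thresh=30.0, color_thresh=40.0):
    """frame, background: HxWx3 float arrays (e.g. RGB in 0..255)."""
    # First cue: area that is not background, by brightness difference.
    brightness_diff = np.abs(frame.mean(axis=2) - background.mean(axis=2))
    first = brightness_diff > bright_thresh
    # Second cue: area that is not background, by per-channel color difference.
    color_diff = np.abs(frame - background).sum(axis=2)
    second = color_diff > color_thresh
    # Keep color-based foreground corroborated by the brightness cue
    # (a stand-in for the claim's sub-area center test).
    return second & first

bg = np.zeros((2, 2, 3))
frame = bg.copy()
frame[0, 0] = [200.0, 200.0, 200.0]   # one pixel changes strongly
mask = foreground_mask(frame, bg)
```

Only the strongly changed pixel survives both cues, which mirrors the claim's intent of suppressing shadows and noise that trip a single cue.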
A face detection and tracking method for detecting and tracking a plurality of faces in real time by combining visual information of an input image, the method comprising: obtaining a first area which is not a background by using a brightness difference of an input image and a background image stored in advance, and obtaining a second area which is not the background by using a color difference of the input image and the background image stored in advance, and among a plurality of sub-areas included in the second area that is not the background, extracting each sub-area which includes the center of a sub-area included in the first area that is not the background, as areas that are not background, to remove the background image from the input image and extract an area having a motion; generating a skin color probability map (Pskin) of the area having a motion by using a face skin color model; extracting a plurality of highest points of the area having a motion, setting central coordinates at a predetermined distance from the plurality of highest points, and calculating a probability that a face is located within a predetermined distance from the central coordinates, to generate a global probability map (Pglobal); generating a multiple scale probability map about the probability that a face is located by multiplying the skin color probability map (Pskin) and the global probability map (Pglobal), and extracting an area, in which the probability value of the generated multiple scale probability map is equal to or greater than a predetermined threshold value, as a candidate area where a face is possibly located; extracting independent component analysis (ICA) features from the candidate area and determining whether the candidate area is a face area by providing the ICA features of the candidate area to a support vector machine (SVM) which has learned features obtained by performing ICA on learning face images and features obtained by performing ICA on images that are not face images; and tracking a face area according to a directional kernel indicating a probability that a face is located in a next frame based on the skin color probability map.

19. A computer readable medium having embodied thereon a computer program operable to cause one or more machines to execute the method of claim 10.