Operation-discerning apparatus and apparatus for discerning posture of subject
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06K-009/46
G06K-009/66
H04N-007/14
Application Number
UP-0958430
(2004-10-06)
Registration Number
US-7813533
(2010-11-01)
Priority Information
JP-2003-347146(2003-10-06)
Inventors / Address
Maeda, Masahiro
Kato, Noriji
Ikeda, Hitoshi
Applicant / Address
Fuji Xerox Co., Ltd.
Agent / Address
Oliff & Berridge, PLC
Citation Information
Times cited: 0 / Cited patents: 10
Abstract
The recognition apparatus shoots pictures of at least a portion of a subject including a human face and obtains a sequence of image data. The facial portion is recognized from the image data. Each image data in the sequence of image data is processed successively. From the image data, an image region including the facial portion is identified. Either a color of the subject within a region defined based on the identified image region or the result of detection of moving regions is used for processing for identifying the image region including the facial portion.
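The color-based identification described in the abstract amounts to a two-stage pipeline: a grayscale detector is tried first, and when it fails the apparatus falls back to a center of gravity computed from previously stored skin-color information. The sketch below is a minimal illustration under that reading; the function names, the exponential similarity weighting, and the fixed 32-pixel fallback box are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def detect_face_grayscale(gray):
    """First process: detect the face in a grayscale frame.
    Placeholder for any grayscale detector (e.g. a template or cascade
    detector); returns (x, y, w, h) or None. Illustrative stub only."""
    return None  # pretend the grayscale detector failed this frame

def skin_color_info(color, region):
    """Acquisition step: mean color of the pixels inside a detected
    face region, stored for use by later frames."""
    x, y, w, h = region
    return color[y:y + h, x:x + w].reshape(-1, 3).mean(axis=0)

def center_of_gravity(color, stored_skin):
    """Second process: weight each pixel position by its similarity to
    the stored skin color and return the weighted mean position."""
    diff = np.linalg.norm(color.astype(float) - stored_skin, axis=2)
    weight = np.exp(-diff / 32.0)      # closer to skin color -> heavier
    ys, xs = np.mgrid[0:color.shape[0], 0:color.shape[1]]
    total = weight.sum()
    return (xs * weight).sum() / total, (ys * weight).sum() / total

def identify_face(color, stored_skin):
    """Grayscale-first identification with a color-based fallback."""
    gray = color.mean(axis=2)            # image conversion unit
    region = detect_face_grayscale(gray) # first process
    if region is not None:
        return region, skin_color_info(color, region)
    # First process failed: fall back to the color center of gravity,
    # centering an assumed fixed-size box on it.
    cx, cy = center_of_gravity(color, stored_skin)
    return (int(cx) - 16, int(cy) - 16, 32, 32), stored_skin
```

With a synthetic frame containing a single skin-colored patch, the fallback box lands on the patch even though the grayscale stage reports failure.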
Representative Claims
What is claimed is:

1. A recognition apparatus for recognizing a facial portion of a person containing a face by taking a sequence of color image data, the recognition apparatus comprising: an image conversion unit that converts single color image data including the facial portion into single grayscale image data; a facial portion-identifying unit that performs a first process that identifies a facial image region of the single grayscale image data, and performs a second process that identifies a facial image region of the single color image data when the first process fails to identify the facial image region; an acquisition unit that obtains skin color information about skin color of the person from pixel values within a region of the single color image data corresponding to the facial image region identified by the first process; and a storage unit that stores the obtained skin color information as stored skin color information; wherein the facial portion-identifying unit performs the second process that identifies the facial image region based on a center of gravity of color, the center of gravity of color being calculated based on the stored skin color information, the stored skin color information being obtained from another single color image data that was converted into another single color grayscale image data and processed by the first process before the single image data.

2. A recognition apparatus for recognizing a facial portion of a person containing a face by taking a sequence of color image data, the recognition apparatus comprising: an image conversion unit that converts each color image data into a grayscale image data; a facial portion-identifying unit that identifies a facial image region including the facial portion by performing a first process that identifies a facial image region of the grayscale image data corresponding to each color image data, and performing a second process that identifies a facial image region of each color image data when the first process fails to identify the facial image region in the gray scale image data corresponding to each color image data; a detection unit that detects state of movement of a candidate region within image data, the candidate region being determined based on information representing the facial image region, the facial image region being represented by previous image data previously processed; and a facial portion region estimation unit that estimates the facial image region within current image data, based on the detected state of movement, wherein the detection unit includes a processor, and the detection unit (i) calculates an amount of movement of each pixel in the candidate region based on the current image data and the previous image data, (ii) calculates an average of the amount of movement of all pixels in the candidate region, and (iii) detects the state of movement of the candidate region based on the average, and the amount of movement of each pixel is a number of pixels from a first position at which a portion corresponding to one pixel is within the previous image data to a second position at which that portion is within the current image data.

3. The recognition apparatus according to claim 2, further comprising: an acquisition unit that obtains skin color information about skin color of the person from pixel values within the facial image region identified by the facial portion-identifying unit; and a storage unit that stores the obtained skin color information as stored skin color information; wherein the facial portion-identifying unit identifies the facial image region from currently processed image data by using the stored skin color information, the stored skin color information being obtained from previously processed image data, and wherein the facial portion-identifying unit (i) searches one or more candidate regions that have a plurality of pixels that are adjacent within the image data and (ii) identifies one candidate image region as the facial image region based on a position of each pixel in each candidate region and differences between the pixel value of each pixel and the stored skin color information.

4. A recognition apparatus for recognizing a subject by shooting the subject and taking a sequence of color image data, the recognition apparatus comprising: an image conversion unit that converts single color image data including the facial portion into single grayscale image data; a subject identifying unit that performs a first process that identifies a subject image region of the single grayscale image data and performs a second process that identifies a subject image region of the single color image data when the first process fails to identify the subject image region; an acquisition unit that obtains subject color information about a color of the subject from pixel values within a region of the single color image data corresponding to the subject image region identified by the first process; and a storage unit that stores the obtained subject color information as stored subject color information; wherein the subject identifying unit performs the second process that identifies the subject image region based on a center of gravity of color, the center of gravity of color being calculated based on the stored subject color information, the stored subject color information being obtained from another single color image data that was converted into another single color grayscale image data and processed by the first process before the single image data.

5. A recognition apparatus for recognizing a subject by shooting the subject and taking a sequence of color image data, the recognition apparatus comprising: an image conversion unit that converts each color image data into a grayscale image data; a subject portion-identifying unit that identifies a subject image region including the subject by performing a first process that identifies a facial image region of the grayscale image data corresponding to each color image data, and performing a second process that identifies a facial image region of each color image data when the first process fails to identify the facial image region in the gray scale image data corresponding to each color image data; a detection unit that detects state of movement of a candidate region within previous image data previously processed, the candidate region being determined based on information representing the subject image region; and a subject region estimation unit that estimates the subject image region based on the detected state of movement, the image region being represented by current image data, wherein the detection unit includes a processor, and the detection unit (i) calculates an amount of movement of each pixel in the candidate region based on the current image data and the previous image data, (ii) calculates an average of the amount of movement of all pixels in the candidate region, and (iii) detects the state of movement of the candidate region based on the average, and the amount of movement of each pixel is a number of pixels from a first position at which a portion corresponding to one pixel is within the previous image data to a second position at which that portion is within the current image data.

6. A computer-implemented method of recognizing a facial portion of a person containing a face by taking a sequence of color image data, which has been obtained by shooting pictures including at least a portion of the person containing the face, and recognizing the facial portion from the image data, the method comprising: a processor converting single color image data including the facial portion into single grayscale image data; a processor performing a first process that identifies a facial image region of the single grayscale image data; a processor performing a second process that identifies a facial image region of the single color image data when the first process fails to identify the facial image region; a processor obtaining skin color information about skin color of the person from pixel values within a region of the single color image data corresponding to the facial image region identified by the first process; and a processor storing the obtained skin color information into a storage unit as stored skin color information; wherein in the step of performing the second process, the stored skin color information, that is associated with image data previously processed, is used for processing the facial image region based on a center of gravity of color, the center of gravity of color being calculated based on the stored skin color information, the stored skin color information being obtained from another single color image data that was converted into another single color grayscale image data and processed by the first process before the single image data.

7. A computer-implemented method used by a recognition device, the method of recognizing a facial portion of a person containing a face by taking a sequence of color image data, which has been obtained by shooting pictures including at least a portion of the person containing the face, and recognizing the facial portion from the image data, the method comprising: an image conversion unit that converts each color image data into a grayscale image data; a processor identifying a facial image region including the facial portion by performing a first process that identifies a facial image region of the grayscale image data corresponding to each color image data, and performing a second process that identifies a facial image region of each color image data when the first process fails to identify the facial image region in the gray scale image data corresponding to each color image data; a processor detecting state of movement of a candidate region within previous image data previously processed, the candidate region being determined based on information representing the facial image region; and a processor estimating the image region including the facial portion within current image data, based on the detected state of movement, wherein when the state of movement is detected, (i) an amount of movement of each pixel in the candidate region is calculated based on the current image data and the previous image data, (ii) an average of the amount of movement of all pixels in the candidate region is calculated, and (iii) the state of movement of the candidate region is detected based on the average, and the amount of movement of each pixel is a number of pixels from a first position at which a portion corresponding to one pixel is within the previous image data to a second position at which that portion is within the current image data.

8. A computer readable non-transitory medium encoded with computer readable instructions for recognizing a facial portion of a person containing a face by taking a sequence of color image data, the instructions comprising: converting single color image data including the facial portion into single grayscale image data; performing a first process that identifies a first facial image region of the single grayscale image data; performing a second process that identifies a facial image region of the single color image when the first process fails to identify the facial image region; obtaining skin color information about skin color of the person from pixel values within a region of the single color image data corresponding to the facial image region identified by the first process; and storing the obtained skin color information into a storage unit as stored skin color information; whereby in the instruction of performing the second process, the facial image region is identified based on a center of gravity of color, the center of gravity of color being calculated based on the stored skin color information, the stored skin color information being obtained from another single color image data that was converted into another single color grayscale image data and processed by the first process before the single image data.

9. A computer readable non-transitory medium encoded with computer readable instructions for recognizing a facial portion of a person containing a face by taking a sequence of color image data, the instructions comprising: an image conversion unit that converts each color image data into a grayscale image data; identifying a facial image region including the facial portion by performing a first process that identifies a facial image region of the grayscale image data corresponding to each color image data, and performing a second process that identifies a facial image region of each color image data when the first process fails to identify the facial image region in the gray scale image data corresponding to each color image data; detecting state of movement of a candidate region within previous image data previously processed, the candidate region being determined based on information representing the facial image region including the identified facial portion; and estimating the image region including the facial portion within the current image data based on the detected state of movement, wherein when the state of movement is detected, (i) an amount of movement of each pixel in the candidate region is calculated based on the current image data and the previous image data, (ii) an average based on amounts of movement of all pixels in the candidate region is calculated, and (iii) the state of movement of the candidate region is detected based on the average, and the amount of movement of each pixel is a number of pixels from a first position at which a portion corresponding to one pixel is within the previous image data to a second position at which that portion is within the current image data.
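Claims 2, 5, 7, and 9 describe detecting the "state of movement" of a candidate region by measuring each pixel's movement (the number of pixels it travels between the previous frame and the current one), averaging those amounts over the region, and estimating the region's new location from the result. A minimal sketch of that computation, assuming the per-pixel displacements are already available from some dense motion estimator (the claims do not specify how they are obtained):

```python
import numpy as np

def movement_state(flow, region):
    """Detection unit sketch. `flow` holds one (dx, dy) displacement per
    pixel, e.g. from any dense optical-flow estimator (an assumption;
    the claims only require each pixel's movement in pixels)."""
    x, y, w, h = region
    patch = flow[y:y + h, x:x + w].reshape(-1, 2)
    # (i) amount of movement of each pixel: its displacement magnitude
    amounts = np.linalg.norm(patch, axis=1)
    # (ii) average of the amounts over all pixels in the candidate region
    avg = amounts.mean()
    # Mean displacement vector, used below to estimate the new region.
    mean_dx, mean_dy = patch.mean(axis=0)
    # (iii) the average is the detected state of movement
    return avg, (mean_dx, mean_dy)

def estimate_region(region, mean_motion):
    """Region estimation unit sketch: shift the previous facial region
    by the mean motion to predict its place in the current frame."""
    x, y, w, h = region
    dx, dy = mean_motion
    return (int(round(x + dx)), int(round(y + dy)), w, h)
```

For a region whose pixels all moved by (3, 1), the average movement is sqrt(10) pixels and the estimated region is the previous one shifted by that vector.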
Patents cited by this patent (10)
Edanami, Takafumi (JPX): Display control system for videoconference terminals.
Schonfeld, Dan; Hariharakrishnan, Karthik; Raffy, Philippe; Yassa, Fathy: Occlusion/disocclusion detection using K-means clustering near object boundary with comparison of average motion of clusters to object and background motions.
Okubo, Atsushi; Sabe, Kohtaro; Kawamoto, Kenta; Fukuchi, Masaki: Robot device and face identifying method, and image identifying device and image identifying method.