| Field | Value |
|---|---|
| Country / Type | United States (US) Patent, Granted |
| IPC (7th edition) | |
| Application No. | US-0618396 (2006-12-29) |
| Registration No. | US-7412077 (2008-08-12) |
| Inventors / Address | |
| Applicant / Address | |
| Citation Info | Cited by: 350 / Patents cited: 4 |
A method for head pose estimation may include receiving block motion vectors for a frame of video from a block motion estimator, selecting at least one block for analysis, determining an average motion vector for the at least one selected block, combining the average motion vectors over time (all past frames of video) to determine an accumulated average motion vector, estimating the orientation of a user's head in the video frame based on the accumulated average motion vector, and outputting at least one parameter indicative of the estimated orientation.
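The per-frame pipeline described in the abstract (select blocks, average their motion vectors, accumulate over time, map to an orientation) can be sketched as follows. The magnitude threshold for block selection and the linear mapping from accumulated displacement to yaw/pitch are assumptions for illustration; the patent text does not fix specific values or a mapping.

```python
import numpy as np

# Hypothetical threshold: minimum motion-vector magnitude for a block
# to be selected for analysis (the patent leaves the value open).
MAG_THRESHOLD = 0.5

def estimate_head_pose(block_mvs, accumulated):
    """One iteration of the accumulation step from the abstract.

    block_mvs   : (N, 2) array of per-block motion vectors for one frame
    accumulated : running accumulated average motion vector, shape (2,)
    Returns the updated accumulated vector and a crude (yaw, pitch) estimate.
    """
    # Select blocks whose motion exceeds the threshold.
    mags = np.linalg.norm(block_mvs, axis=1)
    selected = block_mvs[mags > MAG_THRESHOLD]
    # Average motion vector over the selected blocks for this frame.
    frame_avg = selected.mean(axis=0) if selected.size else np.zeros(2)
    # Combine with past frames to obtain the accumulated average vector.
    accumulated = accumulated + frame_avg
    # Map accumulated displacement to orientation parameters
    # (a linear mapping is an assumption, not from the patent).
    yaw, pitch = accumulated * 0.1
    return accumulated, (yaw, pitch)
```

For example, starting from a zero accumulator, a frame whose selected blocks average to a rightward motion of 1.5 pixels shifts the accumulated vector by that amount, and the estimated yaw moves proportionally.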
What is claimed is:

1. A method for head pose estimation, comprising: receiving block motion vectors for a frame of video from a block motion estimator; selecting at least one block for analysis; determining an average motion vector for the at least one selected block; combining the determined average motion vector with average motion vectors from past frames of video to obtain an accumulated average motion vector; estimating the orientation of a user's head in the video frame based on the accumulated average motion vector; and outputting at least one parameter indicative of the estimated orientation.

2. The method of claim 1, wherein the selecting includes comparing magnitudes of the received motion vectors with a threshold.

3. The method of claim 1, further comprising: determining whether the user's head in the video frame is making a physical gesture based on the accumulated average motion vector; and outputting a signal indicative of the physical gesture when it is determined that the physical gesture is made.

4. The method of claim 3, wherein the step of determining whether the user's head in the video frame is making a physical gesture comprises: generating a waveform of accumulated average motion vectors for the at least one selected block; classifying positions along the waveform as a positive, a negative, or a zero state; determining that a physical gesture has been made when the waveform completes one cycle; and sending a signal indicative of the physical gesture when it is determined that the physical gesture has been made.

5. The method of claim 4, wherein the physical gesture comprises one of a head nod and a head shake.

6. The method of claim 4, further comprising: generating an avatar having a pose substantially corresponding to the estimated orientation of the user's head; and controlling motion of the avatar to imitate the physical gesture based on the signal.

7. The method of claim 1, further comprising generating an avatar having a pose substantially corresponding to the estimated orientation of the user's head.

8. An apparatus for head pose estimation, comprising: a block motion estimator module configured to receive frames of video; a head pose estimator configured to receive block motion vectors from the block motion estimator, select at least one block for analysis, determine an average motion vector for the at least one selected block, combine the determined average motion vector with average motion vectors from past frames of video to obtain an accumulated average motion vector, estimate the orientation of a user's head in the video frame based on the accumulated average motion vector, and output at least one parameter indicative of the estimated orientation.

9. The apparatus of claim 8, wherein the head pose estimator is further configured to select the at least one block by comparing magnitudes of the received motion vectors with a threshold.

10. The apparatus of claim 8, wherein the head pose estimator is further configured to determine whether the user's head in the video frame is making a physical gesture based on the accumulated average motion vector and output a signal indicative of the physical gesture when it is determined that the physical gesture is made.

11. The apparatus of claim 10, wherein the head pose estimator determines whether the user's head is making a physical gesture by generating a waveform of accumulated average motion vectors across a number of past frames for the at least one selected block, classifying positions along the waveform as a positive, a negative, or a zero state, and determining that a physical gesture has been made when the waveform completes one cycle, and the head pose estimator is further configured to send a signal indicative of the physical gesture when it is determined that the physical gesture has been made.

12. The apparatus of claim 10, wherein the physical gesture comprises one of a head nod and a head shake.

13. A wireless communication device, comprising: a transceiver configured to send and receive signals; a block motion estimator module configured to receive frames of video; a head pose estimator configured to receive block motion vectors from the block motion estimator, select at least one block for analysis, determine an average motion vector for the at least one selected block, combine the determined average motion vector with average motion vectors from past frames of video to obtain an accumulated average motion vector, estimate the orientation of a user's head in the video frame based on the accumulated average motion vector for the at least one selected block, and output at least one parameter indicative of the estimated orientation.

14. The device of claim 13, wherein the head pose estimator is further configured to select the at least one block by comparing magnitudes of the received motion vectors with a threshold.

15. The device of claim 13, wherein the head pose estimator is further configured to determine whether the user's head in the video frame is making a physical gesture based on the accumulated average motion vector and output a signal indicative of the physical gesture when it is determined that the physical gesture is made.

16. The device of claim 15, wherein the head pose estimator determines whether the user's head is making a physical gesture by generating a waveform of accumulated average motion vectors for the at least one selected block, classifying positions along the waveform as a positive, a negative, or a zero state, and determining that a physical gesture has been made when the waveform completes one cycle, and the head pose estimator is further configured to send a signal indicative of the physical gesture when it is determined that the physical gesture has been made.

17. The device of claim 15, wherein the physical gesture comprises one of a head nod and a head shake.

18. The device of claim 15, wherein the wireless communication device cooperates with a second communication device to form a wireless communication system, the second communication device being configured to generate an avatar having a pose substantially corresponding to the estimated orientation of the user's head and control motion of the avatar to imitate the physical gesture based on the signal.

19. The device of claim 13, wherein the wireless communication device cooperates with a second communication device to form a wireless communication system, the second communication device being configured to generate an avatar having a pose substantially corresponding to the estimated orientation of the user's head.
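The gesture test in claims 4, 11, and 16 (classify waveform positions as positive, negative, or zero, then fire when the waveform completes one cycle) can be sketched as below. The dead-zone width and the exact definition of "one cycle" (zero, then one sign, then the opposite sign, then back to zero) are illustrative assumptions; the claims name only the three states and the cycle condition.

```python
def classify(value, dead_zone=0.2):
    """Classify one waveform position as +1 (positive), -1 (negative),
    or 0 (zero state). The dead-zone width is a hypothetical choice."""
    if value > dead_zone:
        return 1
    if value < -dead_zone:
        return -1
    return 0

def detect_gesture(waveform, dead_zone=0.2):
    """Return True when a waveform of accumulated average motion vector
    components completes one full cycle, i.e. zero -> positive ->
    negative -> zero or the mirror image (e.g. one nod or one shake)."""
    states = [classify(v, dead_zone) for v in waveform]
    # Collapse runs of identical states into a state sequence.
    seq = [s for i, s in enumerate(states) if i == 0 or s != states[i - 1]]
    # Look for one full oscillation in either direction.
    for a in (1, -1):
        pattern = [0, a, -a, 0]
        for i in range(len(seq) - 3):
            if seq[i:i + 4] == pattern:
                return True
    return False
```

A vertical-component waveform that swings up, then down, then returns to rest would register a nod; a sequence that only rises and settles back without the opposite swing completes no cycle and is ignored.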
Copyright KISTI. All Rights Reserved.