IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (none listed)
Application Number: US-0817102 (filed 2010-06-16)
Registration Number: US-8639020 (granted 2014-01-28)
Inventors / Address:
- Kutliroff, Gershom
- Bleiweiss, Amit
- Glazer, Itamar
- Madmoni, Maoz
Applicant / Address: (none listed)
Agent / Address: Blakely, Sokoloff, Taylor & Zafman LLP
Citation Information: cited by 17 patents; cites 25 patents
Abstract
A method for modeling and tracking a subject using image depth data includes locating the subject's trunk in the image depth data and creating a three-dimensional (3D) model of the subject's trunk. Further, the method includes locating the subject's head in the image depth data and creating a 3D model of the subject's head. The 3D models of the subject's head and trunk can be exploited by removing pixels from the image depth data corresponding to the trunk and the head of the subject, and the remaining image depth data can then be used to locate and track an extremity of the subject.
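The remove-then-track idea in the abstract (delete the depth pixels explained by the trunk and head models, then search what remains for extremities) can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the function name, the vertical-cylinder trunk model, and the `margin` parameter are all invented for the example.

```python
import numpy as np

def remove_trunk_pixels(points, axis_xz, radius, margin=0.02):
    """Drop 3D depth points that fall inside a fitted trunk cylinder.

    points  : (N, 3) array of depth-camera points (x, y, z)
    axis_xz : (cx, cz) location of a vertical cylinder modeling the trunk
    radius  : cylinder radius, in the same units as the points
    margin  : slack added to the radius so borderline trunk pixels
              are also removed (illustrative choice)

    Returns the points outside the enlarged cylinder; in the method
    described by the abstract, this remainder is what would be
    searched for extremities such as the arms.
    """
    cx, cz = axis_xz
    # Radial distance of each point from the cylinder axis in the x-z plane.
    d = np.hypot(points[:, 0] - cx, points[:, 2] - cz)
    return points[d > radius + margin]
```

A head model would be handled the same way (e.g. a second mask for a sphere or ellipsoid), with the surviving points grouped into blobs for arm and leg tracking.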
Representative Claims
1. A method performed by a processor comprising: receiving, at the processor, image depth data, wherein the image depth data includes depth data of a subject to be modeled; locating a trunk of the subject from the image depth data, and creating a three-dimensional (3D) model of the trunk of the subject; locating a head of the subject using the 3D model of the trunk and the image depth data, and creating a 3D model of the head of the subject; using the 3D models of the trunk and the head of the subject to remove a subset of data from the image depth data corresponding to the trunk and the head of the subject; and using remaining image depth data to locate an extremity of the subject, wherein the 3D model of the trunk of the subject is a parametric cylinder model, and further wherein parameters of a cylinder of the parametric cylinder model are determined using a least-squares approximation based on the image depth data corresponding to the trunk of the subject.

2. The method of claim 1, wherein using remaining image depth data to locate the extremity of the subject comprises: detecting a blob, from the remaining image depth data, that corresponds to an arm; determining whether the blob corresponds to a right arm or a left arm; and calculating where a hand and an elbow are located based at least on the blob.

3. The method of claim 2, wherein an inverse kinematics solver is used to determine whether the blob corresponds to the right arm or the left arm and to calculate where the hand and the elbow are located.

4. The method of claim 2 further comprising tracking a location of the subject in a sequence of images, wherein each image has its own image depth data.

5. The method of claim 4 further comprising recognizing a gesture performed by the subject, wherein recognizing a gesture includes storing a plurality of locations of the subject and comparing the plurality of locations of the subject to gestures in a gesture database.

6. The method of claim 1 wherein the subject is a human.

7. The method of claim 1 wherein the subject is an animal.

8. A system comprising: an image sensor that acquires image depth data; a background engine that creates a model of an image background from the image depth data; a subject manager that determines a subset of the image depth data that corresponds to a subject; and a subject tracking engine communicatively coupled to the image sensor, the background engine, and the subject manager, wherein the subject tracking engine: creates a three-dimensional (3D) model of a torso and a head of a subject based on the model of the image background and the subset of the image depth data; and locates an extremity of the subject by using the 3D model of the torso and the head of the subject and the subset of the image depth data, without using color data, wherein the 3D model of the torso of the subject is a parametric cylinder model, and further wherein parameters of a cylinder of the parametric cylinder model are computed using a least-squares approximation based on image depth data corresponding to the torso.

9. The system of claim 8, wherein the image sensor acquires image depth data for a plurality of sequential images, and the subject tracking engine comprises a two-dimensional torso tracking engine that determines and tracks a torso location of the subject in the sequential images.

10. The system of claim 9, wherein the subject tracking engine further comprises a pelvis locating engine that determines a pelvis location of the subject based at least on the torso location from the two-dimensional torso tracking engine.

11. The system of claim 10, wherein the subject tracking engine further comprises a 3D torso tracking engine that creates and tracks the 3D model of the torso of the subject in the sequential images based on the image depth data and the torso location.

12. The system of claim 8, wherein the subject tracking engine further comprises a head tracking engine that locates and tracks the head of the subject in the sequential images based on the image depth data and the 3D model of the torso of the subject.

13. The system of claim 12, wherein the subject tracking engine further comprises an arm tracking engine that locates and tracks an arm of the subject in the sequential images based on the image depth data, the 3D model of the torso of the subject, and the location of the head of the subject.

14. The system of claim 13, wherein the arm tracking engine tracks the arm of the subject based upon a number of arms identified in the image depth data.

15. The system of claim 13, wherein the subject tracking engine further comprises a leg tracking engine that locates and tracks the legs of the subject in the sequential images based on the image depth data and the location of the torso, the pelvis, the head, and arms of the subject.

16. A system for modeling a subject comprising: means for acquiring image depth data; means for creating a model of an image background from the image depth data, wherein the means for creating receives the image depth data via a direct connection to the means for acquiring image depth data; means for creating a three-dimensional (3D) model of a torso of the subject based on the model of the image background; means for creating a 3D model of the head of the subject based on the 3D model of the torso; and means for locating an extremity of the subject from the image depth data by using the 3D model of the torso and the head of the subject, wherein the 3D model of the torso and head of the subject and the location of the extremity of the subject are processed locally to the means for creating the 3D model by an interactive program; and means for displaying a user's experience with the interactive program, wherein the 3D model of the torso of the subject is a parametric cylinder model, and further wherein parameters of a cylinder of the parametric cylinder model are determined using a least-squares approximation based on the image depth data corresponding to the torso of the subject.
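Claims 1, 8, and 16 all recite determining the cylinder parameters of the trunk/torso model "using a least-squares approximation" from the depth data. One way such a fit could look, assuming (as the claims do not specify) a vertical cylinder axis, is the algebraic Kasa circle fit applied to the points projected onto the x-z plane; this is a sketch of a least-squares cylinder fit, not the specific approximation in the patent.

```python
import numpy as np

def fit_vertical_cylinder(points):
    """Least-squares fit of a vertical (y-axis-aligned) cylinder to 3D points.

    Projects the points onto the x-z plane and solves the algebraic
    (Kasa) circle fit x^2 + z^2 + D*x + E*z + F = 0, which is linear
    in (D, E, F). Returns the axis location (cx, cz) and radius r.
    """
    x, z = points[:, 0], points[:, 2]
    # Linear system A @ [D, E, F] = -(x^2 + z^2), solved in the
    # least-squares sense for noisy depth data.
    A = np.column_stack([x, z, np.ones_like(x)])
    b = -(x**2 + z**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cz = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cz**2 - F)
    return cx, cz, r
```

For a general (tilted) cylinder, the axis direction would have to be estimated as well, which makes the problem nonlinear; the linear Kasa step is often used to seed such a nonlinear refinement.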