Methods for capturing depth data of a scene and applying computer actions
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th Edition)
A63F-013/00
H04N-021/475
G06K-009/00
H04N-021/4223
H04N-021/478
Application Number
US-0392044
(2009-02-24)
Registration Number
US-8840470
(2014-09-23)
Inventors / Address
Zalewski, Gary
Haigh, Mike
Applicant / Address
Sony Computer Entertainment America LLC
Agent / Address
Martine Penilla Group, LLP
Citation Information
Cited by: 2
Patents cited: 226
Abstract
A computer-implemented method is provided to automatically apply predefined privileges for identified and tracked users in a space having one or more media sources. The method includes an operation to define and save to memory, a user profile. The user profile may include data that identifies and tracks a user with a depth-sensing camera. In another operation privileges that define levels of access to particular media for the user profile are defined and saved. The method also includes an operation to capture image and depth data from the depth-sensing camera of a scene within the space. In yet another operation, the user is tracked and identified within the scene from the image and depth data. In still another operation the defined privileges are automatically applied to one or more media sources, so that the user is granted access to selected content from the one or more media sources.
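The access-control flow the abstract describes (save a profile with privileges, identify the user from depth data, then apply the privileges to each media source) can be sketched as follows. This is a minimal illustration, not the patent's implementation: all names (`UserProfile`, `identify_user`, the depth-signature tuple standing in for real image/depth data) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Hypothetical profile: identity features plus per-media-source privileges."""
    name: str
    depth_signature: tuple                      # stand-in for stored image/depth data
    privileges: dict = field(default_factory=dict)  # media source -> allowed content level

def identify_user(scene_signature, profiles):
    """Match captured depth data against saved profiles (simplified to equality)."""
    for profile in profiles:
        if profile.depth_signature == scene_signature:
            return profile
    return None

def apply_privileges(user, media_sources):
    """Grant each media source only the content level the profile allows."""
    return {src: user.privileges.get(src, "blocked") for src in media_sources}

profiles = [UserProfile("child", (1, 2, 3), {"tv": "PG", "game_console": "E"})]
user = identify_user((1, 2, 3), profiles)       # signature captured from the scene
access = apply_privileges(user, ["tv", "game_console", "pc"])
# access == {"tv": "PG", "game_console": "E", "pc": "blocked"}
```

A real system would match noisy depth maps with a tolerance rather than exact equality; the equality test is only a placeholder for that comparison.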
Representative Claims
1. A computer-implemented method, comprising: (a) defining and saving to a memory, a user profile, the user profile including data for identifying and tracking the user with a depth sensing camera; (b) defining and saving to the memory, animations to be integrated into a virtual world scene based on the user profile; (c) capturing a scene using the depth sensing camera; (d) identifying the user within the scene using the depth sensing camera, the identifying further configured to identify stationary objects in the scene, wherein points located on the stationary objects are used to at least partially outline the identified stationary objects; and (e) automatically applying the defined animations onto at least one identified stationary object in the scene to be displayed on a screen, such that the defined animations are selected for the identified and tracked user.

2. The method of claim 1, wherein capturing the scene includes filtering out stationary objects so as to focus on moving objects, and focusing on moving objects includes: focusing on a moving object that is in the scene; analyzing features of the moving object using the image and depth data; and determining if the moving object corresponds to the user.

3. The method of claim 2, wherein the user is one of a human or a pet.

4. The method of claim 3, wherein tracking the user further includes displaying a history path of the user, the history path of the user identifying the movement over time and the animations associated with the movements.

5. The method of claim 4, further comprising: saving the history path to storage; and enabling replay of the history path.

6. The method of claim 1, wherein the animations are applied to contours of the at least one stationary object found in the scene, based on the captured depth data.

7. The method of claim 1, further comprising: pre-selecting the animations for the user, and pre-selecting other animations for other users.

8. The method of claim 1, wherein multimedia content is presented on the display screen along with the animations, based on the identified user.

9. A computer-implemented method, comprising: (a) defining and saving to a memory, a user profile, the user profile including data for identifying and tracking the user with a depth sensing camera; (b) defining and saving to the memory, animations to be applied into a virtual world scene associated with the user profile; (c) capturing a scene using the depth sensing camera; (d) identifying the user within the scene using the depth sensing camera; and (e) automatically applying the defined animations onto objects or stationary objects found in the captured scene using point tracking, the defined animations being pre-defined for the identified and tracked user, so that a display screen shows the applied animations.

10. A computer implemented method, comprising: (a) defining a user profile, the user profile including image and depth data related to physical characteristics of a real-world user, the image and depth data captured by a depth-sensing camera; (b) capturing image and depth data for a scene using the depth-sensing camera, wherein point tracking is used to identify stationary objects in the scene, the points being used to draw outlines of stationary objects found in the scene; (c) identifying moving objects within the scene; (d) locking the depth-sensing camera onto a human head within the scene; (e) analyzing the image and depth data for the human head in real-time, the analysis including comparing image and depth data for the human head to user profile image and depth data related to physical characteristics, wherein a user is identified when image and depth data within the user profile substantially matches image and depth data for the head, and identifying animations pre-selected for the user profile when the user is identified; and (f) applying the identified animations onto selected ones of the stationary objects identified in the scene.

11. The method of claim 10, wherein defining a user profile includes: i. initiating a scan using a depth-sensing camera; ii. focusing the scan to a particular portion of the human body; iii. collecting image and depth data for the particular portion of the human body; iv. processing the collected image and depth data to generate a three-dimensional model of the particular portion of the human body; and v. saving the three-dimensional model to a memory, the three-dimensional model also being associated with a user profile.

12. The method of claim 10, further comprising: applying user permissions associated with the user profile when the user is identified.
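Claims 1, 9, and 10 rely on point tracking to separate stationary objects (whose tracked points do not move between frames) from the moving user, and on using those points to outline the stationary objects. A minimal one-dimensional sketch of that idea, under the simplifying assumption that each frame is a flat list of per-pixel depth values (all function names here are illustrative, not from the patent):

```python
def find_stationary_points(frames, tol=0.0):
    """Indices whose depth is unchanged (within tol) across all frames are stationary."""
    first = frames[0]
    return [
        idx for idx, depth in enumerate(first)
        if all(abs(frame[idx] - depth) <= tol for frame in frames[1:])
    ]

def outline(points):
    """A crude 'outline' of the stationary set: its extreme point indices."""
    return (min(points), max(points))

frames = [
    [5.0, 5.0, 2.0, 5.0],   # per-pixel depth, frame 1
    [5.0, 5.0, 2.5, 5.0],   # frame 2: pixel 2 changed depth (the moving user)
]
stationary = find_stationary_points(frames)
# stationary == [0, 1, 3]; outline(stationary) == (0, 3)
```

In the claimed method the same separation runs in 2D on real depth maps, and the stationary points both anchor the object outlines and mark where the pre-selected animations are applied.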
Patents cited by this patent (226)
Cipolla Roberto (Cambridge GBX) Okamoto Yasukazu (Chiba-ken JPX) Kuno Yoshinori (Osaka-fu JPX), 3D human interface apparatus using motion recognition based on dynamic image processing.
Marks, Richard L.; Mao, Xiadong; Zalewski, Gary M., Computer image and audio processing of intensity and input devices for interfacing with a computer program.
Nobuo Fukushima JP; Tomotaka Muramoto JP; Masayoshi Sekine JP, Display apparatus which detects an observer body part motion in correspondence to a displayed element used to input operation instructions to start a process.
Stoel Leon P. (Sioux Falls SD) Bankers David M. (Sioux Falls SD) Hills Vernon E. (Sioux Falls SD) Plucker Prentice J. (Chanceller SD) Cinco Christopher A. (Sioux Falls SD), Entertainment system and method for controlling connections between terminals and game generators and providing video ga.
Cartabiano Michael C. ; Curran Kenneth J. ; Dick David J. ; Gibbs Douglas R. ; Kirby Morgan H. ; May Richard L. ; Storer William J. A. ; Ullman Adam N., Hand-attachable controller with direction sensing.
Sata Hironori,JPX, Image generating system and information storage medium capable of changing angle of view of virtual camera based on object positional information.
Geagley Bradley K. ; Garlington Joseph O. ; Hagen-Brenner John T. ; Nadler Gary J. ; Redmann William G., Interactive entertainment attraction using telepresence vehicles.
Wallace, Jon K.; Luo, Yun; Dziadula, Robert; Khairallah, Farid, Method and apparatus for determining an occupant's head location in an actuatable occupant restraining system.
Florent Raoul (Valenton FRX) Lelong Pierre (Nogent-Sur-marne FRX), Method and device for processing an image in order to construct from a source image a target image with charge of perspe.
Maes Pattie E. (Somerville MA) Blumberg Bruce M. (Pepperell MA) Darrell Trevor J. (Cambridge MA) Starner Thad E. (Somerville MA) Johnson Michael P. (Cambridge MA) Russell Kenneth B. (Boston MA) Pentl, Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual e.
Matey James R. (Mercerville NJ) Aceti John G. (Cranbury NJ) Pletcher Timothy A. (East Hampton NJ), Method and system for object detection for instrument control.
Okuda, Nobuya; Kobayashi, Tatsuya; Fujimoto, Hirofumi; Matsuyama, Shigenobu, Method for controlling movement of viewing point of simulated camera in 3D video game, and 3D video game machine.
Wergen, Gerhard; Franz, Klaus, Method for transferring characters especially to a computer and an input device which functions according to this method.
Kobayashi Hiroshi (3-15 Hanakoganei Kodaira-shi ; Tokyo JPX) Machida Haruhiko (10-7 Nakaochiai 4-chome Shinjuki-ku ; Tokyo JPX) Ema Hideaki (Shizuoka JPX) Akedo Jun (Tokyo JPX), Method of measuring the amount of movement of an object having uniformly periodic structure.
Everett ; Jr. Hobart R. (San Diego CA) Gilbreath Gary A. (San Diego CA) Laird Robin T. (San Diego CA), Navigational control system for an autonomous vehicle.
Elko Gary W. (Summit NJ) Sondhi Man M. (Berkeley Heights NJ) West James E. (Plainfield NJ), Noise reduction processing arrangement for microphone arrays.
Levine, Bruce M.; Wirth, Allan; Knowles, C. Harry, Ophthalmic instrument with adaptive optic subsystem that measures aberrations (including higher order aberrations) of a human eye and that provides a view of compensation of such aberrations to the h.
Podoleanu, Adrian Gh.; Jackson, David A.; Rogers, John A.; Dobre, George M.; Cucu, Radu G., Optical mapping apparatus with adjustable depth resolution and multiple functionality.
Lake Royden J. (Armidale AUX) Moore John C. (Armidale AUX) Kowald Errol M. (Armidale AUX) Doerr Annegret (Armidale AUX), Optically readable coded target.
Marks, Richard L., Prop input device and method for mapping an object from a two-dimensional camera image to a three-dimensional space for controlling action in a game program.
Krueger Myron W. (55 Edith Rd. Vernon CT 06066) Hinrichsen Katrin (81 Willington Oaks Storrs CT 06268) Gionfriddo Thomas S. (81 Willington Oaks Storrs CT 06268), Real time perception of and response to the actions of an unencumbered participant/user.
Mark John G. (Pasadena CA) Tazartes Daniel A. (West Hills CA) Ebner Robert E. (Tarzana CA) Dahlen Neal J. (Freiburg CA DEX) Datta Nibir K. (West Hills CA), Ring laser gyroscope enhanced resolution system.
Yen, Wei; Wright, Ian; Tu, Xiaoyuan; Reynolds, Stuart; Powers, III, William Robert; Musick, Charles; Funge, John; Dobson, Daniel; Bererton, Curt, Self-contained inertial navigation system for interactive control using movable controllers.
Addeo Eric J. (Long Valley NJ) Robbins John D. (Denville NJ) Shtirmer Gennady (Morris Plains NJ), Sound localization system for teleconferencing using self-steering microphone arrays.
Chang Bay-Wei W. ; Fishkin Kenneth P. ; Harrison Beverly L. ; Igarashi Takeo,JPX ; Mackinlay Jock D. ; Want Roy ; Zellweger Polle T., Spinning as a morpheme for a physical manipulatory grammar.
Dengler, John D.; Garci, Erik J.; Cox, Brian C.; Tolman, Kenneth T.; Weber, Hans X.; Hall, Gerard J., System and method for inserting content into an image sequence.
Lyons Damian M., System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs.
Stam, Joseph S.; Bechtel, Jon H.; Reese, Spencer D.; Roberts, John K.; Tonar, William L.; Poe, G. Bruce; Newhouse, Douglas J., System for controlling exterior vehicle lights.
Stam, Joseph S.; Bechtel, Jon H.; Reese, Spencer D.; Roberts, John K.; Tonar, William L.; Poe, G. Bruce; Newhouse, Douglas J., System for controlling exterior vehicle lights.
Wang John Y. A. (Cambridge MA) Adelson Edward H. (Cambridge MA), System for encoding image data into multiple layers representing regions of coherent motion and associated motion parame.
Freeman William T. ; Leventon Michael E., System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence.
Oishi, Toshimitsu; Okubo, Toru; Domitsu, Hideyuki; Yamano, Tomoya, Video game apparatus, method and recording medium storing program for controlling viewpoint movement of simulated camera in video game.
Sawano, Takao; Matsuoka, Hirofumi; Endo, Takashi, Video game system for capturing images and applying the captured images to animated game play characters.
Bouton Frank M. (Beaverton OR) Kaminsky Stephen T. (Salem OR), Video pinball machine controller having an optical accelerometer for detecting slide and tilt.
Fishkin Kenneth P. ; Goldberg David ; Gujar Anuj Uday ; Harrison Beverly L. ; Mynatt Elizabeth D. ; Stone Maureen C. ; Want Roy, Zoomorphic computer user interface.