Method, system and device for navigating in a virtual reality environment
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06T-015/00; G06T-019/00; G06F-003/01; G06F-003/0481
Application number: US-0433952 (filed 2017-02-15)
Registration number: US-9972136 (granted 2018-05-15)
Inventors: Grossinger, Nadav; Alon, Emil; Bar David, Iddo; Solodnik, Efrat
Applicant: Facebook, Inc.
Attorney / Agent: Fenwick & West LLP
Citation information: times cited: 0; patents cited: 10
Abstract
A method, a system, and a device for navigating in a virtual reality scene, using body parts gesturing and posturing, are provided herein. The method may include: projecting a synthetic 3D scene into both eyes of a user, via a near eye display, so as to provide a virtual reality view to the user; identifying at least one gesture or posture carried out by at least one body part of said user; measuring at least one metric of a vector associated with the detected gesture or posture; applying a movement or action of said user in the virtual reality environment, based on the measured metrics; and modifying the virtual reality view so as to reflect the movement or action of said user in the virtual reality environment.
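The abstract's pipeline (derive a vector from a body-part gesture, measure its metrics, and apply a corresponding movement) can be sketched as follows. This is a minimal illustrative sketch only; the function names, 2D coordinates, and the `gain` parameter are assumptions, not the patent's actual implementation.

```python
import math

def gesture_vector(start, end):
    """Derive the (length, angle) metrics of the vector spanned by a
    tracked body part moving from `start` to `end` (2D points)."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    length = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx))
    return length, angle

def apply_movement(position, length, angle, gain=0.1):
    """Translate the user's virtual position along the vector's
    direction, scaled by its length (one 'measured metric')."""
    rad = math.radians(angle)
    return (position[0] + gain * length * math.cos(rad),
            position[1] + gain * length * math.sin(rad))
```

In a real system, the new position would then drive re-rendering of the scene so the view reflects the user's movement.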
Representative Claims
1. A method comprising: displaying a synthetic 3D scene; identifying at least one gesture or posture carried out by at least one body part of the user; deriving a vector that spatially represents the identified gesture or posture; transforming the vector into a continuous movement or action in the synthetic 3D scene, based on at least one of an angle of the vector and a length of the vector, wherein a speed of the movement is based on a value of the angle; and modifying the synthetic 3D scene to reflect the movement or action in the synthetic 3D scene.
2. The method of claim 1, wherein the identifying the at least one gesture or posture comprises applying a classifier to a database of predefined postures and gestures.
3. The method of claim 1, wherein the movement or action in the synthetic 3D scene is carried out continuously as long as the at least one of the angle of the vector and the length of the vector maintains a value beyond a predefined threshold.
4. The method of claim 1, wherein the movement or action in the synthetic 3D scene is terminated responsive to detecting a predefined termination act.
5. The method of claim 1, wherein the movement or action in the synthetic 3D scene is terminated responsive to detecting a return of the at least one body part to a basic posture position.
6. The method of claim 1, wherein the identified posture is leaning forward a torso of the user or raising shoulders of the user.
7. The method of claim 1, wherein the identified posture is defined by a spatial relationship between at least two body parts of the user, and the method further comprises determining angles between the at least two body parts and transforming the determined angles into the continuous movement or action in the synthetic 3D scene.
8. The method of claim 1, wherein the vector associated with the identified gesture or posture further comprises a spatial direction angle.
9. The method of claim 1, wherein identifying the at least one gesture or posture further comprises subtracting movement components affiliated with head movements of the user to retrieve postures and gestures of the at least one body part that are relative to the head of the user.
10. The method of claim 1, further comprising superimposing a virtual user-interface object into the synthetic 3D scene enabling the user to apply gestures and postures relative to the virtual user-interface object.
11. A system comprising: a device configured to display a synthetic 3D scene; and a computer processor configured to: identify at least one gesture or posture carried out by at least one body part of the user; derive a vector that spatially represents the identified gesture or posture; transform the vector into a continuous movement or action in the synthetic 3D scene, based on at least one of an angle of the vector and a length of the vector, wherein a speed of the movement is based on a value of the angle; and modify the synthetic 3D scene to reflect the movement or action in the synthetic 3D scene.
12. The system of claim 11, wherein the computer processor is further configured to apply a classifier to a database of predefined postures and gestures to identify the at least one gesture or posture.
13. The system of claim 11, wherein the computer processor is further configured to perform the movement or action in the synthetic 3D scene continuously as long as the at least one of the angle of the vector and the length of the vector maintains a value beyond a predefined threshold.
14. The system of claim 11, wherein the computer processor is further configured to terminate the movement or action in the synthetic 3D scene responsive to detection of a predefined termination act.
15. The system of claim 11, wherein the computer processor is further configured to terminate the movement or action in the synthetic 3D scene responsive to detecting a return of the at least one body part to a basic posture position.
16. The system of claim 11, wherein the identified posture is leaning forward a torso of the user or raising shoulders of the user.
17. The system of claim 11, wherein the identified posture is defined by a spatial relationship between at least two body parts of the user, and the computer processor is further configured to determine angles between the at least two body parts and transform the determined angles into the continuous movement or action in the synthetic 3D scene.
18. The system of claim 11, wherein the vector associated with the identified gesture or posture further comprises a spatial direction angle.
19. The system of claim 11, wherein the computer processor is further configured to subtract movement components affiliated with head movements of the user to retrieve postures and gestures of the at least one body part that are relative to the head of the user.
20. The system of claim 11, wherein the computer processor is further configured to superimpose a virtual user-interface object into the synthetic 3D scene enabling the user to apply gestures and postures relative to the virtual user-interface object.
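The threshold-and-termination behavior of claims 1, 3, and 5 (continuous movement whose speed depends on the vector's angle, sustained while the angle exceeds a predefined threshold and terminated when the body part returns to the basic posture) can be illustrated per frame as below. The function name, the threshold, and the speed constant are hypothetical values chosen for illustration, not from the patent.

```python
ANGLE_THRESHOLD_DEG = 10.0   # predefined threshold (claim 3)
SPEED_PER_DEGREE = 0.05      # speed as a function of the angle value (claim 1)

def step_movement(position, vector_angle_deg):
    """Advance the user's 1D position for one frame.

    Returns (new_position, moving); `moving` becomes False once the
    body part has returned toward the basic posture position (claim 5),
    i.e. the vector's angle falls below the threshold.
    """
    if abs(vector_angle_deg) < ANGLE_THRESHOLD_DEG:
        return position, False  # movement terminates
    speed = SPEED_PER_DEGREE * abs(vector_angle_deg)
    direction = 1.0 if vector_angle_deg > 0 else -1.0
    return position + direction * speed, True
```

Calling `step_movement` once per rendered frame yields continuous motion for as long as the gesture is held beyond the threshold, matching the "carried out continuously" language of claim 3.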
Patents cited by this patent (10)
Langridge, Adam Jethro; Matthews, Christopher; Simmons, Guy; Molyneux, Peter Douglas, Action selection gesturing.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film.