Method, system and device for navigating in a virtual reality environment
IPC Classification
Country/Type
United States(US) Patent
Registered
International Patent Classification (IPC, 7th edition)
G06T-015/00
G06T-019/00
G06F-003/01
G06F-003/0481
Application Number
US-0975486
(2015-12-18)
Registration Number
US-9659413
(2017-05-23)
Inventors / Address
Grossinger, Nadav
Alon, Emil
Bar David, Iddo
Solodnik, Efrat
Applicant / Address
Facebook, Inc.
Agent / Address
Fenwick & West LLP
Citation Information
Cited by: 2
Patents cited: 5
Abstract
A method, a system, and a device for navigating in a virtual reality scene using body-part gestures and postures are provided herein. The method may include: projecting a synthetic 3D scene into both eyes of a user, via a near-eye display, so as to provide a virtual reality view to the user; identifying at least one gesture or posture carried out by at least one body part of said user; measuring at least one metric of a vector associated with the detected gesture or posture; applying a movement or action of said user in the virtual reality environment, based on the measured metrics; and modifying the virtual reality view so as to reflect the movement or action of said user in the virtual reality environment.
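The core idea of the abstract — reducing a detected gesture to a vector and mapping that vector's metrics onto movement — can be sketched as follows. This is an illustrative reading only, not the patent's implementation; the function names (`gesture_vector`, `velocity_from_gesture`) and the specific gain/threshold parameters are assumptions for the sake of the example.

```python
import math

def gesture_vector(start, end):
    """Vector from a body part's base position to its current position,
    returned as the (length, angle) metrics the abstract describes."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    return (math.hypot(dx, dy), math.atan2(dy, dx))

def velocity_from_gesture(length, angle, gain=2.0, threshold=0.05):
    """Map the vector's metrics to a continuous velocity in the scene.
    Below the threshold (body part near its basic posture) no movement
    is applied; otherwise speed scales with the vector's length."""
    if length < threshold:
        return (0.0, 0.0)
    speed = gain * length
    return (speed * math.cos(angle), speed * math.sin(angle))
```

A longer or more steeply angled gesture thus produces a proportionally faster or redirected movement, which is the mapping the claims below formalize.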
Representative Claims
1. A method comprising: projecting a synthetic 3D scene, into both eyes of a user, so as to provide a virtual reality view to the user; identifying at least one gesture or posture carried out by at least one body part of the user; deriving a vector that spatially represents the identified gesture or posture; transferring the vector into a continuous movement or action of the user in a virtual reality environment, based on at least one of an angle of the vector and a length of the vector; and modifying the virtual reality view so as to reflect the movement or action of the user in the virtual reality environment.
2. The method of claim 1, wherein the identifying the at least one gesture or posture comprises applying a classifier to a database of predefined postures and gestures.
3. The method of claim 1, wherein the movement or action of the user in the virtual reality environment is carried out continuously as long as the at least one of the angle of the vector and the length of the vector maintains a value beyond a predefined threshold.
4. The method of claim 1, wherein the movement or action of the user in the virtual reality environment is terminated responsive to detecting a predefined termination act.
5. The method of claim 1, wherein the movement or action of the user in the virtual reality environment is terminated responsive to detecting a return of the body part to a basic posture position.
6. The method of claim 1, wherein the identified posture is leaning forward a torso of the user.
7. The method of claim 1, wherein the identified posture is raising shoulders of the user.
8. The method of claim 1, wherein the identified posture is defined by a spatial relationship between at least two body parts of the user, and the method further comprises determining angles between the at least two body parts and transferring the determined angles into the continuous movement or action of the user in the virtual reality environment.
9. The method of claim 1, wherein the vector associated with the identified gesture or posture further comprises a spatial direction angle.
10. The method of claim 1, wherein identifying the at least one gesture or posture further comprises subtracting a movement component affiliated with head movements of the user, so as to only retrieve postures and gestures of the at least one body part that are relative to the head of the user.
11. The method of claim 1, further comprising superimposing a virtual user-interface object into the synthetic 3D scene, enabling the user to apply gestures and postures relative to the virtual user-interface object.
12. A system comprising: a device configured to project a synthetic 3D scene, into both eyes of a user, so as to provide a virtual reality view to the user; and a computer processor configured to: identify at least one gesture or posture carried out by at least one body part of the user; derive a vector that spatially represents the identified gesture or posture; transfer the vector into a continuous movement or action of the user in a virtual reality environment, based on at least one of an angle of the vector and a length of the vector; and modify the virtual reality view so as to reflect the movement or action of the user in the virtual reality environment.
13. The system of claim 12, wherein the computer processor is further configured to apply a classifier to a database of predefined postures and gestures to identify the at least one gesture or posture.
14. The system of claim 12, wherein the computer processor is further configured to perform the movement or action of the user in the virtual reality environment continuously as long as the at least one of the angle of the vector and the length of the vector maintains a value beyond a predefined threshold.
15. The system of claim 12, wherein the computer processor is further configured to terminate the movement or action of the user in the virtual reality environment responsive to detection of a predefined termination act.
16. The system of claim 12, wherein the computer processor is further configured to terminate the movement or action of the user in the virtual reality environment responsive to detecting a return of the body part to a basic posture position.
17. The system of claim 12, wherein the identified posture is leaning forward a torso of the user.
18. The system of claim 12, wherein the identified posture is raising shoulders of the user.
19. The system of claim 12, wherein the identified posture is defined by a spatial relationship between at least two body parts of the user, and the computer processor is further configured to determine angles between the at least two body parts and transfer the determined angles into the continuous movement or action of the user in the virtual reality environment.
20. The system of claim 12, wherein the vector associated with the identified gesture or posture further comprises a spatial direction angle.
21. The system of claim 12, wherein the computer processor is further configured to subtract a movement component affiliated with head movements of the user, so as to only retrieve postures and gestures of the at least one body part that are relative to the head of the user.
22. The system of claim 12, wherein the computer processor is further configured to superimpose a virtual user-interface object into the synthetic 3D scene, enabling the user to apply gestures and postures relative to the virtual user-interface object.
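Claims 3-5 describe a continuation/termination rule: movement persists each frame while the vector's metric stays beyond a threshold, and stops when the body part returns to its basic posture (or a termination act is detected). A minimal simulation of that rule, assuming a per-frame sequence of gesture-vector lengths (the function name `simulate` and the parameter values are hypothetical, not from the patent):

```python
def simulate(frames, threshold=0.1, step=1.0):
    """frames: gesture-vector lengths sampled once per frame.
    Movement accumulates while the length exceeds the threshold
    (claim 3) and terminates once it falls back to it (claim 5).
    Returns the total distance moved before termination."""
    distance = 0.0
    for length in frames:
        if length <= threshold:  # basic posture reached: terminate
            break
        distance += step * length  # continuous movement, scaled by length
    return distance
```

For example, a gesture held for two frames and then released (`[0.5, 0.5, 0.05, 0.5]`) moves the user for the first two frames only, since the drop to 0.05 terminates the action before the final sample is considered.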
Patents cited by this patent (5)
Langridge, Adam Jethro; Matthews, Christopher; Simmons, Guy; Molyneux, Peter Douglas, Action selection gesturing.
Border, John N.; Bietry, Joseph; Osterhout, Ralph F., See-through near-eye display glasses including a curved polarizing film in the image source, a partially reflective, partially transmitting optical element and an optically flat film.