TOUCH AND SOCIAL CUES AS INPUTS INTO A COMPUTER
IPC Classification Information
Country / Type
United States (US) Patent
Published
International Patent Classification (IPC, 7th edition)
G06T-019/00
G06Q-010/10
G02B-027/01
G06F-003/01
G06F-003/03
Application Number
US-0389098 (2016-12-22)
Publication Number
US-0103582 (2017-04-13)
Inventors / Address
Novak, Christopher Michael
Liu, James
Latta, Stephen
Andrews, Anton O.A.
Maitlen, Craig R.
Martin, Sheridan
Applicant / Address
MICROSOFT TECHNOLOGY LICENSING, LLC
Citation Information
Cited by: 0
Citing patents: 0
Abstract
A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).
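The rule-based inference described in the abstract can be sketched as a simple lookup: interaction features (distance, environment, gesture) map to an inferred relationship, which in turn selects the virtual objects to display. The rule structure, category names, and thresholds below are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the social-rules lookup from the abstract.
# All rules, names, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Interaction:
    distance_m: float   # distance to the other person, in meters
    environment: str    # e.g., "home" or "work"
    gesture: str        # e.g., "handshake", "hug", "high_five"

def infer_relationship(interaction: Interaction) -> str:
    """Apply simple social rules to infer a relationship category."""
    if interaction.gesture == "hug" and interaction.environment == "home":
        return "friend"
    if interaction.gesture == "handshake" and interaction.environment == "work":
        return "acquaintance"
    if interaction.distance_m < 0.5:  # close personal distance
        return "friend"
    return "stranger"

# The virtual objects shown on the HMD depend on the inferred relationship.
OBJECTS_BY_RELATIONSHIP = {
    "friend": ["shared_photos", "status_message"],
    "acquaintance": ["business_card"],
    "stranger": [],
}

def objects_to_display(interaction: Interaction) -> list[str]:
    return OBJECTS_BY_RELATIONSHIP[infer_relationship(interaction)]
```

A real system would populate such rules from the social rules database the abstract mentions rather than hard-coding them.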
Representative Claims
1. A method, comprising: identifying a particular person within a field of view of a mobile device; detecting that a person associated with the mobile device has performed a gesture at a point in time coinciding with an electronically scheduled meeting between the person associated with the mobile device and the particular person; acquiring virtual data associated with an augmented reality environment displayed to the particular person in response to detecting that the person associated with the mobile device has performed the gesture at the point in time coinciding with the electronically scheduled meeting between the person associated with the mobile device and the particular person; and displaying the virtual data using the mobile device.

2. The method of claim 1, wherein: the detecting that the person associated with the mobile device has performed the gesture at the point in time coinciding with the electronically scheduled meeting between the person and the particular person includes acquiring an electronic calendar for the person, the electronic calendar includes the electronically scheduled meeting between the person and the particular person.

3. The method of claim 1, wherein: the gesture is one of a handshake gesture, a hug gesture, or a high-five gesture.

4. The method of claim 1, wherein: the detecting that the person associated with the mobile device has performed the gesture at the point in time includes detecting that the person associated with the mobile device has performed a handshake gesture with the particular person.

5. The method of claim 1, wherein: the detecting that the person associated with the mobile device has performed the gesture at the point in time includes detecting that the person associated with the mobile device has touched the particular person.

6. The method of claim 5, wherein: the detecting that the person associated with the mobile device has touched the particular person includes performing skeletal tracking of the particular person.

7. The method of claim 1, wherein: the acquiring virtual data includes acquiring a plurality of virtual objects displayed within the augmented reality environment.

8. The method of claim 7, further comprising: transmitting other virtual data associated with a second augmented reality environment displayed to the person using the mobile device to a second mobile device displaying the plurality of virtual objects to the particular person.

9. The method of claim 8, wherein: the transmitting the other virtual data to the second mobile device includes transmitting a set of virtual objects displayed using the mobile device at the point in time to the second mobile device.

10. The method of claim 1, wherein: the mobile device comprises a head-mounted display device.

11. An electronic device, comprising: one or more processors configured to identify a particular person within a field of view of the electronic device and determine a first time period corresponding with an electronically scheduled meeting between an end user of the electronic device and the particular person, the one or more processors configured to detect that the end user of the electronic device has performed a gesture during the first time period and acquire virtual data associated with an augmented reality environment displayed to the particular person in response to detection that the end user of the electronic device has performed the gesture during the first time period; and a display configured to display the virtual data subsequent to acquisition of the virtual data.

12. The electronic device of claim 11, wherein: the one or more processors configured to acquire an electronic calendar for the end user of the electronic device, the electronic calendar includes the electronically scheduled meeting between the end user and the particular person.

13. The electronic device of claim 11, wherein: the gesture is one of a handshake gesture, a hug gesture, or a high-five gesture.

14. The electronic device of claim 11, wherein: the one or more processors configured to detect that the end user has performed a handshake gesture with the particular person.

15. The electronic device of claim 11, wherein: the one or more processors configured to detect that the end user has touched the particular person.

16. The electronic device of claim 15, wherein: the one or more processors configured to perform skeletal tracking of the particular person and detect that the end user has touched the particular person based on the skeletal tracking.

17. The electronic device of claim 11, wherein: the one or more processors configured to transmit other virtual data associated with a second augmented reality environment displayed to the end user to a second mobile device displaying the virtual data to the particular person.

18. The electronic device of claim 17, wherein: the one or more processors configured to transmit a set of virtual objects displayed using the electronic device at the point in time to the second mobile device.

19. The electronic device of claim 11, wherein: the electronic device comprises a head-mounted display device; and the display comprises a see-through display.

20. One or more storage devices containing processor readable code for programming one or more processors to perform a method, the processor readable code comprising: processor readable code configured to identify a particular person within a field of view of a mobile device; processor readable code configured to determine a first time period corresponding with an electronically scheduled meeting between an end user of the mobile device and the particular person; processor readable code configured to detect that the end user of the mobile device has performed a gesture during the first time period, the gesture is one of a handshake gesture, a hug gesture, or a high-five gesture; processor readable code configured to acquire virtual data associated with an augmented reality environment displayed to the particular person in response to detecting that the end user of the mobile device has performed the gesture during the first time period; and processor readable code configured to display the virtual data using the mobile device.
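Claim 1 recites a trigger condition: a recognized gesture performed during an electronically scheduled meeting with the identified person causes the device to acquire that person's virtual data. A minimal sketch of that check, assuming a simple list-of-dicts calendar representation (the gesture set follows claims 3 and 13; all names are hypothetical):

```python
# Illustrative sketch of the trigger recited in claim 1: a gesture
# performed during a scheduled meeting with an identified person.
# The calendar format and function names are assumptions.
from datetime import datetime

TRIGGER_GESTURES = {"handshake", "hug", "high_five"}  # per claims 3 and 13

def should_acquire_virtual_data(gesture: str,
                                gesture_time: datetime,
                                identified_person: str,
                                calendar: list[dict]) -> bool:
    """Return True when the gesture coincides with a scheduled meeting
    between the device's user and the identified person."""
    if gesture not in TRIGGER_GESTURES:
        return False
    for meeting in calendar:
        if (meeting["attendee"] == identified_person
                and meeting["start"] <= gesture_time <= meeting["end"]):
            return True
    return False
```

Consulting the electronic calendar, as in claims 2 and 12, is what distinguishes an incidental handshake from one that should trigger sharing of virtual data.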