Abstract
Technologies described herein provide a mixed environment display of attached control elements. The techniques disclosed herein enable users of a first computing device to interact with a remote computing device configured to control an object, such as a light, an appliance, or any other suitable object. Configurations disclosed herein enable the first computing device to cause one or more actions, such as a selection of the object or the display of a user interface, by capturing and analyzing input data defining the performance of one or more gestures, such as a user looking at the object controlled by the remote computing device. Rendered graphical elements configured to enable control of the object can be displayed with a real-world view of the object.
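To make that flow concrete, the following is a minimal, self-contained Python sketch of the interaction the abstract describes: recognize an object, surface its commands as graphical elements, and forward a gesture-selected command to the device that controls the object. Every name here (ControllerDevice, run_once, the registry) is a hypothetical illustration rather than an identifier from the patent, and the recognition, rendering, and gesture steps are reduced to toy stand-ins.

    from dataclasses import dataclass, field

    @dataclass
    class ControllerDevice:
        """Stand-in for the remote (second) computing device that controls an object."""
        address: str
        commands: dict = field(default_factory=dict)  # command name -> handler

        def execute(self, command: str) -> str:
            # Runs "at least a portion" of the device's own instructions.
            return self.commands[command]()

    def run_once(recognized_object: str, registry: dict, selected_index: int) -> str:
        """One pass: resolve the device, show its commands, act on a selection."""
        device = registry[recognized_object]   # identification data -> device
        menu = sorted(device.commands)         # the "graphical elements" (toy)
        print(f"Overlay for {recognized_object}: {menu}")
        chosen = menu[selected_index]          # gesture hit-test (toy)
        return device.execute(chosen)          # communicate the command

    lamp = ControllerDevice("10.0.0.7", {"toggle": lambda: "light toggled"})
    registry = {"desk lamp": lamp}             # physical characteristic -> device
    print(run_once("desk lamp", registry, selected_index=0))

Running this prints the toy overlay and then "light toggled", mirroring the abstract's sequence of selection, display, and command execution.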
Representative Claims
1. A first computing device, comprising:
a processor;
a hardware display surface;
one or more input devices;
a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the first computing device to
obtain control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at a second computing device, wherein the second computing device is configured to interact with an object,
analyze image data from the one or more input devices to identify a physical characteristic of the object,
obtain identification data indicating an association between a network address of the second computing device and the physical characteristic of the object,
establish a connection with the second computing device using the network address in response to identifying the physical characteristic of the object,
obtain status data indicating a status associated with the second computing device or the object,
cause a display of one or more graphical elements comprising the status data and the one or more commands on the hardware display surface, wherein the hardware display surface is configured to display the one or more graphical elements with a real-world view of the object through a transparent section of the hardware display surface,
capture a gesture of at least a portion of a hand, performed by a user, wherein the gesture is viewable by the user through the transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the object,
select at least one of the one or more graphical elements based on the gesture, and
communicate the one or more commands to the second computing device in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the second computing device.

2. The computing device of claim 1, wherein the execution of at least the portion of the second set of computer-executable instructions at the second computing device controls at least one component of the object.

3. The computing device of claim 1, wherein the computer-executable instructions further cause the first computing device to:
select the object based, at least in part, on data defining the identified physical characteristic; and
initiate the connection in response to the selection of the object.

4. The computing device of claim 1, wherein the computer-executable instructions further cause the first computing device to:
select the object based, at least in part, on data defining the identified physical characteristic of the object; and
communicate a request for the control data to the second computing device in response to the selection of the object, wherein the request is communicated by the use of the network address of the second computing device.
5. The computing device of claim 1, wherein the computer-executable instructions further cause the first computing device to:
receive data defining a location of the second computing device;
receive data defining a location of the first computing device;
receive gaze direction data defining a direction of a field of view of the hardware display surface; and
select the object based, at least in part, on the data defining the location of the second computing device, the data defining the location of the first computing device, and the gaze direction data, and wherein the display of the one or more graphical elements is caused in response to the selection of the object.

6. The computing device of claim 5, wherein the computer-executable instructions further cause the first computing device to determine if the gaze direction data indicates that the field of view is directed toward the object, and wherein the second computing device is selected if the gaze direction data indicates that the field of view is directed toward the object.

7. The computing device of claim 5, wherein the computer-executable instructions further cause the first computing device to:
determine if the first computing device is within a predetermined distance from the second computing device; and
select the object if the data defining the location of the second computing device and the data defining the location of the first computing device indicate that the first computing device is within the predetermined distance from the second computing device, and wherein the display of the one or more graphical elements is caused in response to the selection of the object.

8. The computing device of claim 1, wherein the selection of the one or more commands detected by the one or more input devices comprises interpreting an input signal indicating the selection of the one or more commands.
9. A computer-implemented method, comprising:
obtaining, at a first computing device, control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at a second computing device, wherein the second computing device is configured to interact with an object,
analyzing image data from one or more input devices to identify a physical characteristic of the object,
obtaining identification data indicating an association between a network address of the second computing device and the physical characteristic of the object,
establishing a connection with the second computing device using the network address in response to identifying the physical characteristic of the object,
causing a display of one or more graphical elements comprising the one or more commands on a hardware display surface associated with the first computing device, wherein the hardware display surface is configured to display the one or more graphical elements with a real-world view of the object through a transparent section of the hardware display surface, the real-world view of the object provided by a camera generating a video feed of an environment around the first computing device,
capturing a gesture of at least a portion of a hand, performed by a user, wherein the gesture is viewable by the user through the transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the object,
selecting at least one of the one or more graphical elements based on the gesture, and
communicating the one or more commands from the first computing device to the second computing device in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the second computing device.

10. The method of claim 9, wherein the one or more graphical elements are configured to indicate an association between the real-world view of the object and the display of the one or more graphical elements comprising the one or more commands.

11. The method of claim 9, wherein the one or more graphical elements are configured to indicate an association between a rendering of a component of the object and the display of the one or more graphical elements comprising the one or more commands.

12. The method of claim 9, further comprising, in response to the selection of the one or more commands, causing a modification of the one or more graphical elements to indicate an association between a real-world view of a component of the object and the display of the one or more graphical elements comprising the one or more commands.

13. The method of claim 9, further comprising, in response to the selection of the one or more commands, causing a modification of the one or more graphical elements to indicate an association between the real-world view of the object and the display of the one or more graphical elements comprising the one or more commands.
14. A first computing device, comprising:
a processor;
a hardware display surface;
one or more input devices;
a memory having computer-executable instructions stored thereupon which, when executed by the processor, cause the first computing device to
obtain control data defining one or more commands configured to cause an execution of a second set of computer-executable instructions at one or more controller devices, wherein the one or more controller devices are configured to interact with one or more objects,
analyze image data from the one or more input devices to identify a physical characteristic of the one or more objects,
obtain identification data indicating an association between a network address of the second computing device and the physical characteristic of the one or more objects,
establish a connection with the second computing device using the network address in response to identifying the physical characteristic of the one or more objects,
cause a display of one or more graphical elements comprising the one or more commands on the hardware display surface, wherein the display further comprises a rendering of a real-world view of the one or more objects,
capture a gesture performed by a user, wherein the gesture is viewable by the user through the transparent section of the hardware display surface, and wherein the gesture is viewable by the user with the real-world view of the one or more objects,
select at least one of the one or more graphical elements based on the gesture, and
communicate the one or more commands to the one or more controller devices in response to the selection of the one or more commands detected by the one or more input devices, wherein the communication of the one or more commands causes an execution of at least a portion of the second set of computer-executable instructions at the one or more controller devices.

15. The computing device of claim 14, wherein the execution of at least the portion of the second set of computer-executable instructions at the one or more controller devices controls at least one component of the one or more objects.

16. The computing device of claim 15, wherein the computer-executable instructions further cause the first computing device to:
select the one or more objects based, at least in part, on data defining the identified physical characteristic of the one or more objects, wherein the display of the one or more graphical elements is caused in response to the selection of the one or more objects, and wherein the communication of the one or more commands uses the network address.

17. The computing device of claim 14, wherein the computer-executable instructions further cause the first computing device to:
select the one or more objects based, at least in part, on data defining the identified physical characteristic of the one or more objects, wherein the display of the one or more graphical elements is caused in response to the selection of the one or more objects, and wherein the communication of the one or more commands is based, at least in part, on the network address.
18. The computing device of claim 14, wherein the computer-executable instructions further cause the first computing device to:
receive data defining a location of the one or more objects;
receive data defining a location of the first computing device;
receive gaze direction data defining a direction of a field of view of the hardware display surface; and
select the one or more objects based, at least in part, on the data defining the location of the one or more objects, the data defining the location of the first computing device, and the gaze direction data, and wherein the display of the one or more graphical elements is caused in response to the selection of the one or more objects.

19. The computing device of claim 18, wherein the computer-executable instructions further cause the first computing device to utilize data captured by the one or more input devices to determine if the gaze direction data indicates that the field of view is directed toward the one or more objects, and wherein the one or more objects are selected if the gaze direction data indicates that the field of view is directed toward the one or more objects.

20. The computing device of claim 18, wherein the computer-executable instructions further cause the first computing device to:
determine if the first computing device is within a predetermined distance from the one or more objects; and
select the one or more objects if the data defining the location of the one or more objects and the data defining the location of the first computing device indicate that the first computing device is within the predetermined distance from the one or more objects, and wherein the display of the one or more graphical elements is caused in response to the selection of the one or more objects.
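Claims 1, 9, and 14 above recite analyzing image data to identify a physical characteristic of an object and obtaining identification data that associates that characteristic with a network address. The Python sketch below illustrates one plausible form of that lookup, assuming a small feature vector stands in for the physical characteristic; the table contents, the nearest-neighbor matching, and the threshold are illustrative assumptions, not details from the claims.

    import math

    # Identification data: physical characteristic (label, feature vector)
    # mapped to the network address of the device controlling that object.
    IDENTIFICATION_DATA = {
        ("lamp", (0.9, 0.1, 0.2)): "10.0.0.7",
        ("fan",  (0.2, 0.8, 0.5)): "10.0.0.9",
    }

    def resolve_address(observed, threshold=0.3):
        """Return the address whose stored characteristic is nearest to the
        observed one, or None when nothing matches closely enough."""
        best_addr, best_dist = None, threshold
        for (_, signature), addr in IDENTIFICATION_DATA.items():
            dist = math.dist(signature, observed)
            if dist < best_dist:
                best_addr, best_dist = addr, dist
        return best_addr

    print(resolve_address((0.88, 0.12, 0.22)))  # 10.0.0.7: close to the lamp
    print(resolve_address((0.5, 0.5, 0.5)))     # None: no close match

A real system would derive the observed vector from the analyzed image data and then establish a connection to the returned address, as the claims go on to recite.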
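Claims 5 through 7 and 18 through 20 select an object from the locations of the two devices together with gaze direction data, and additionally gate the selection on a predetermined distance. One plausible version of that test follows, assuming positions and gaze are 3-D vectors; the cone angle and distance limits are illustrative values, not numbers from the patent.

    import math

    def is_selected(device_pos, object_pos, gaze_dir,
                    max_distance=5.0, max_angle_deg=10.0):
        """True when the object is both within the predetermined distance
        and inside a narrow cone around the gaze direction."""
        to_object = [o - d for o, d in zip(object_pos, device_pos)]
        distance = math.hypot(*to_object)
        if distance == 0 or distance > max_distance:  # proximity (claims 7, 20)
            return False
        # Angle between gaze and the direction to the object (claims 6, 19):
        # a small angle means the field of view is directed toward the object.
        dot = sum(t * g for t, g in zip(to_object, gaze_dir))
        cos_angle = dot / (distance * math.hypot(*gaze_dir))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
        return angle <= max_angle_deg

    # Two meters from a lamp, looking straight at it: selected.
    print(is_selected((0, 0, 0), (0, 0, 2), (0, 0, 1)))   # True
    # Same lamp, gaze 90 degrees off: not selected.
    print(is_selected((0, 0, 0), (0, 0, 2), (1, 0, 0)))   # False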