Input can be provided to a computing device based upon relative movement of a user or other object with respect to the device. In some embodiments, infrared radiation is used to determine measurable aspects of the eyes or other features of a user. Since the human retina is a retro-reflector for certain wavelengths, using two different wavelengths or two measurement angles can allow user pupils to be quickly located and measured without requiring resource-intensive analysis of full color images captured using ambient light, which can be important for portable, low power, or relatively inexpensive computing devices. Various embodiments provide differing levels of precision and design, suitable for use with different devices.
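The abstract's core idea, comparing a frame that contains retinal retro-reflection against a concurrent frame that does not, can be illustrated with a short sketch. This is a minimal illustration and not the patented implementation: the function name locate_pupils, the difference threshold, and the pixel-gap heuristic for separating the two eyes are all assumptions.

```python
# Minimal sketch (not from the patent): locating retro-reflected pupils by
# differencing an on-axis IR frame against an off-axis IR frame. Assumes the
# two grayscale frames are captured concurrently and roughly registered, so
# the retinal "bright pupil" glow is the dominant difference between them.
import numpy as np

def locate_pupils(on_axis: np.ndarray, off_axis: np.ndarray,
                  threshold: float = 40.0) -> list[tuple[int, int]]:
    """Return (row, col) centers of candidate pupil regions.

    on_axis:  frame from the sensor adjacent to the IR emitter, containing
              retro-reflected light from the retinas.
    off_axis: concurrent frame from the displaced sensor, containing
              essentially none of the retro-reflection.
    """
    # Retro-reflection shows up as bright spots only in the on-axis frame.
    diff = on_axis.astype(np.float32) - off_axis.astype(np.float32)
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return []
    # Separate the two eyes by splitting bright pixels on horizontal gaps.
    # A real implementation would use connected-component labeling; the
    # 20-pixel gap here is a tunable guess.
    order = np.argsort(xs)
    xs, ys = xs[order], ys[order]
    splits = np.where(np.diff(xs) > 20)[0] + 1
    centers = []
    for x_grp, y_grp in zip(np.split(xs, splits), np.split(ys, splits)):
        centers.append((int(y_grp.mean()), int(x_grp.mean())))
    return centers
```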
Representative Claims
1. A computer-implemented method of enabling a user to provide input to an electronic device, comprising: under control of one or more computing systems configured with executable instructions, capturing at least one ambient light image including at least a portion of a user of the electronic device; determining an approximate head position of the user with respect to the electronic device based at least in part upon the at least one ambient light image; capturing a first image including infrared (IR) light using a first infrared sensor, the IR light being emitted by an infrared source of the electronic device and being reflected by at least one retina of the user, wherein the first infrared sensor is positioned substantially adjacent to the infrared source; capturing, concurrently with the first image, a second image using a second infrared sensor positioned a distance away from the infrared source on the electronic device, the second image not including any of the IR light emitted by the infrared source and reflected from the at least one retina; selecting a first portion of the first image and a second portion of the second image, each of the first portion and the second portion corresponding to the approximate head position detected in the at least one ambient light image; comparing corresponding intensity values between the first portion and the second portion to determine a relative position of the at least one retina of the user; determining a relative orientation of the user with respect to the electronic device based at least in part upon the determined relative position of the at least one retina; and based at least in part upon the relative orientation, providing a corresponding input to the electronic device.

2. The computer-implemented method of claim 1, further comprising: attempting to locate the second portion in the second image that corresponds to the first portion in the first image, wherein corresponding intensity values between the second portion and the first portion are compared and the approximate distance determined only if matching portions are located.

3. The computer-implemented method of claim 1, wherein the electronic device includes first and second infrared sources and the first image and the second image are captured using a single sensor.
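Claim 1 chains these steps into a pipeline: ambient-light head detection, concurrent on-axis and off-axis IR capture, intensity comparison within the head region, and mapping the resulting retina position to an input. Below is a hedged sketch of that flow under stated assumptions: head_box is a placeholder for whatever lightweight head detector a device would use, locate_pupils is the earlier sketch, and the dead-zone mapping to a directional input is purely illustrative.

```python
# Hedged sketch of the claim 1 pipeline (names and thresholds are
# illustrative assumptions, not the patented implementation). An ambient
# light frame gives an approximate head box; the IR comparison is run only
# on that crop, and the pupil offset within the box is mapped to a coarse
# orientation used as device input.
import numpy as np

def head_box(ambient: np.ndarray) -> tuple[int, int, int, int]:
    """Placeholder head detector: returns (top, left, height, width).
    A real device would run a lightweight face/head detector here."""
    h, w = ambient.shape[:2]
    return h // 4, w // 4, h // 2, w // 2   # assume a roughly centered head

def relative_orientation(on_axis, off_axis, ambient):
    top, left, hh, ww = head_box(ambient)
    crop_on = on_axis[top:top + hh, left:left + ww]
    crop_off = off_axis[top:top + hh, left:left + ww]
    pupils = locate_pupils(crop_on, crop_off)   # from the earlier sketch
    if not pupils:
        return None
    # Mean pupil position, normalized to [-1, 1] within the head box.
    mean_y = sum(p[0] for p in pupils) / len(pupils)
    mean_x = sum(p[1] for p in pupils) / len(pupils)
    return (2 * mean_x / ww - 1, 2 * mean_y / hh - 1)

def to_input(orientation, dead_zone=0.2):
    """Map the normalized orientation to a coarse directional input."""
    if orientation is None:
        return "none"
    x, y = orientation
    if abs(x) < dead_zone and abs(y) < dead_zone:
        return "center"
    return ("left" if x < 0 else "right") if abs(x) >= abs(y) else \
           ("up" if y < 0 else "down")
```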
4. A computer-implemented method of enabling a user to provide input to an electronic device, comprising: under control of one or more computing systems configured with executable instructions, capturing at least one ambient light image including at least a portion of a user of the electronic device; determining an approximate head position of the user with respect to the electronic device based at least in part upon the at least one ambient light image; capturing a first image of the user using a first sensor, the first sensor being positioned substantially adjacent to a radiation emitter, the first image including radiation reflected from at least one feature of the user, the radiation having a wavelength outside a visible spectrum of a human and emitted by the radiation emitter on the electronic device; capturing, concurrently with the first image, a second image of the user using a second sensor positioned a distance away from the radiation emitter on the electronic device, the second image including substantially none of the radiation reflected from the at least one feature of the user; selecting a first portion of the first image and a second portion of the second image, each of the first portion and the second portion corresponding to the approximate head position detected in the at least one ambient light image; comparing corresponding pixel values between the first portion and the second portion to determine a relative location of the at least one feature of the user; and based at least in part upon the relative location of the at least one feature, providing input to the electronic device.

5. The computer-implemented method of claim 4, wherein the electronic device includes first and second radiation emitters and the first image and the second image are captured using a single radiation sensor.

6. The computer-implemented method of claim 5, wherein the first radiation emitter is positioned substantially adjacent to the single radiation sensor, whereby the single radiation sensor is able to detect retro-reflected radiation from the first radiation emitter, and the second radiation emitter is positioned a distance away from the radiation sensor, whereby the radiation sensor is substantially unable to detect retro-reflected radiation from the second radiation emitter.

7. The computer-implemented method of claim 5, wherein the first radiation emitter emits radiation within a first range of wavelengths capable of being substantially reflected by a human eye, and wherein the second radiation emitter emits radiation within a second range of wavelengths capable of being substantially absorbed by the human eye.

8. The computer-implemented method of claim 7, wherein the first range of wavelengths is less than about 940 nm, and the second range of wavelengths is greater than about 940 nm.

9. The computer-implemented method of claim 5, wherein the first image and the second image are portions of a single image captured by the single radiation sensor.

10. The computer-implemented method of claim 9, further comprising: providing a wavelength selective filter array whereby the single radiation sensor is capable of capturing an image including regions in a first range of wavelengths and regions in a second range of wavelengths.

11. The computer-implemented method of claim 5, further comprising: triggering the first and second radiation emitters such that reflected light for each radiation emitter is captured on alternating scan lines of a single image.
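Claims 9 through 11 describe capturing both wavelength bands with a single sensor, for example by strobing the two emitters so that each band lands on alternating scan lines of one frame. A minimal sketch of the deinterleaving step follows, assuming (purely for illustration) that even rows were exposed under the strongly retro-reflected band below about 940 nm and odd rows under the absorbed band above about 940 nm:

```python
# Sketch of the single-sensor variant in claims 9-11 (illustrative only):
# the two emitters are strobed so one band lands on even scan lines and the
# other on odd scan lines. Deinterleaving the frame then yields a pair of
# half-images analogous to the on-axis/off-axis pair used above.
import numpy as np

def deinterleave(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an interleaved frame into reflected-band and absorbed-band
    half-images (even rows vs. odd rows)."""
    reflected = frame[0::2, :]   # rows exposed under the < 940 nm emitter
    absorbed = frame[1::2, :]    # rows exposed under the > 940 nm emitter
    return reflected, absorbed
```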
12. The computer-implemented method of claim 4, wherein the at least one feature of the user comprises eyes of a user, and wherein the radiation is infrared radiation.

13. The computer-implemented method of claim 4, wherein the first image and the second image each contain at least a portion of multiple persons, and wherein the input to the electronic device is based at least in part upon information determined from the first image and the second image for the at least the portion of the multiple persons.

14. The computer-implemented method of claim 4, further comprising: determining an approximate distance to the at least one feature of the user, wherein the input provided to the electronic device is further based upon the approximate distance.

15. The computer-implemented method of claim 14, wherein the approximate distance to the at least one feature is determined using parallax information determined from the first image and the second image.

16. The computer-implemented method of claim 14, wherein the approximate distance to the at least one feature is determined using at least one ultrasonic element.

17. The computer-implemented method of claim 14, wherein the approximate distance to the at least one feature is determined by tracking a relative size of the at least one feature in subsequent images.

18. The computer-implemented method of claim 14, wherein the approximate distance to the at least one feature is determined by monitoring a focal length of at least one optical element of the electronic device substantially focusing on the at least one feature.

19. The computer-implemented method of claim 4, further comprising: matching at least a portion of the first image and the second image in order to determine a relative offset between the first image and the second image.

20. The computer-implemented method of claim 19, wherein the matching is performed using a sliding window, the sliding window comprising a distinctive portion of one of the first image and the second image.

21. The computer-implemented method of claim 4, further comprising: capturing at least one initial image of at least a portion of the user of the electronic device using an ambient light camera in order to determine an approximate location of the user.
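Claims 14-15 and 19-20 suggest estimating distance from parallax between the two images, matching a distinctive window from one image against the other with a sliding window. The sketch below uses a plain sum-of-squared-differences search and the standard stereo relation depth = f * B / disparity; the focal length, baseline, window size, and search range are all illustrative assumptions, not values from the patent.

```python
# Sketch of the parallax distance estimate in claims 14-15, using the
# sliding-window matching of claims 19-20. All parameters are assumptions.
import numpy as np

def disparity(first: np.ndarray, second: np.ndarray,
              row: int, col: int, win: int = 8, search: int = 40) -> int:
    """Find the horizontal offset of a distinctive window centered at
    (row, col) in the first image within the second image along the same
    scan line. (row, col) should be at least win + search from the edges."""
    template = first[row - win:row + win, col - win:col + win].astype(np.float32)
    best, best_err = 0, np.inf
    for d in range(-search, search + 1):
        c = col + d
        patch = second[row - win:row + win, c - win:c + win].astype(np.float32)
        if patch.shape != template.shape:
            continue   # window ran off the image edge
        err = np.sum((patch - template) ** 2)   # sum of squared differences
        if err < best_err:
            best, best_err = d, err
    return best

def distance_mm(d_pixels: int, focal_px: float = 600.0,
                baseline_mm: float = 60.0) -> float:
    """Standard stereo relation: depth = focal length * baseline / disparity."""
    return focal_px * baseline_mm / max(abs(d_pixels), 1)
```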
22. A computing device, comprising: a processor; and a memory device including instructions that, when executed by the processor, cause the processor to: capture at least one ambient light image including at least a portion of a user of the computing device; determine an approximate head position of the user with respect to the computing device based at least in part upon the at least one ambient light image; capture a first image of the user using a first sensor, the first sensor being positioned substantially adjacent to a radiation emitter, the first image including radiation reflected from at least one feature of the user, the radiation having a wavelength outside a visible spectrum of a human and emitted by the radiation emitter on the computing device; capture, concurrently with the first image, a second image of the user using a second sensor positioned a distance away from the radiation emitter on the computing device, the second image including substantially none of the radiation reflected from the at least one feature of the user; select a first portion of the first image and a second portion of the second image corresponding to the approximate head position detected in the at least one ambient light image; compare corresponding pixel values between the first portion and the second portion to determine a relative location of the at least one feature of the user; and based at least in part upon the relative location of the at least one feature, provide input to the computing device.

23. The computing device of claim 22, wherein the computing device includes first and second radiation sources and the first image and the second image are captured using a single radiation sensor.

24. The computing device of claim 22, wherein a first radiation source is positioned substantially adjacent to a radiation sensor, whereby the radiation sensor is able to detect retro-reflected radiation from the first radiation source, and a second radiation source is positioned a distance away from the radiation sensor, whereby the radiation sensor is substantially unable to detect retro-reflected radiation from the second radiation source.

25. The computing device of claim 22, wherein a first radiation source emits radiation within a first range of wavelengths capable of being reflected by a human retina, and a second radiation source emits radiation within a second range of wavelengths capable of being absorbed by a human cornea.
26. A non-transitory computer-readable storage medium storing instructions for enabling a user to provide input to a computing device, the instructions when executed by a processor causing the processor to: capture at least one ambient light image including at least a portion of a user of the computing device; determine an approximate head position of the user with respect to the computing device based at least in part upon the at least one ambient light image; capture a first image of the user using a first sensor, the first sensor being positioned substantially adjacent to a source, the first image including radiation reflected from at least one feature of the user, the radiation having a wavelength outside a visible spectrum of a human and emitted by the source on the computing device; capture, concurrently with the first image, a second image of the user using a second sensor positioned a distance away from the source on the computing device, the second image including substantially none of the radiation reflected from the at least one feature of the user; select a first portion of the first image and a second portion of the second image corresponding to the approximate head position detected in the at least one ambient light image; compare corresponding pixel values between the first portion and the second portion to determine a relative location of the at least one feature of the user; and based at least in part upon the relative location of the at least one feature, provide input to the computing device.

27. The non-transitory computer-readable storage medium of claim 26, wherein the computing device includes first and second radiation sources and the first image and the second image are captured using a single radiation sensor.

28. The non-transitory computer-readable storage medium of claim 26, wherein a first radiation source is positioned substantially adjacent a radiation sensor, whereby the radiation sensor is able to detect retro-reflected radiation from the first radiation source, and a second radiation source is positioned a distance away from the radiation sensor, whereby the radiation sensor is substantially unable to detect retro-reflected radiation from the second radiation source.

29. The non-transitory computer-readable storage medium of claim 26, wherein the first radiation source emits radiation within a first range of wavelengths capable of being reflected by a human retina, and the second radiation source emits radiation within a second range of wavelengths capable of being absorbed by a human cornea.