Input can be provided to a computing device based upon relative movement of a user or other object with respect to the device. In some embodiments, infrared radiation is used to determine measurable aspects of the eyes or other features of a user. Since the human retina is a retro-reflector for certain wavelengths, using two different wavelengths or two measurement angles allows user pupils to be quickly located and measured without resource-intensive analysis of full-color images captured under ambient light, which can be important for portable, low-power, or relatively inexpensive computing devices. Various embodiments provide differing levels of precision and design that can be used with different devices.
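The retro-reflection idea in the abstract can be illustrated with a minimal sketch (not the patented implementation): given two co-registered IR frames, one at a wavelength the retina retro-reflects (bright pupil) and one it absorbs (dark pupil), skin and background reflect both similarly, so subtracting the frames isolates the pupils. The array sizes, values, and threshold below are made up for illustration.

```python
import numpy as np

# Minimal sketch (not the patented implementation) of differential-wavelength
# pupil detection: the difference between the bright-pupil and dark-pupil
# frames is near zero everywhere except at the pupils.

def locate_pupils(bright_frame, dark_frame, rel_threshold=0.5):
    """Return pixel coordinates whose brightness difference suggests a pupil."""
    diff = bright_frame.astype(float) - dark_frame.astype(float)
    mask = diff > rel_threshold * diff.max()
    return [(int(y), int(x)) for y, x in zip(*np.nonzero(mask))]

# toy 5x5 frames with a single "pupil" at row 2, column 2
bright = np.full((5, 5), 0.2)
bright[2, 2] = 1.0                  # retro-reflection makes the pupil bright
dark = np.full((5, 5), 0.2)         # same scene at the absorbed wavelength
print(locate_pupils(bright, dark))  # -> [(2, 2)]
```

A real implementation would additionally group the thresholded pixels into connected regions and take one centroid per region, but the difference-then-threshold step is the part that avoids full-color image analysis.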
Representative Claims
1. A computer-implemented method of enabling a user to provide a control input to an electronic device, comprising: under control of one or more computing systems configured with executable instructions, illuminating, by a first infrared (IR) source of the electronic device, at least a portion of a face; illuminating, by a second IR source of the electronic device, at least the portion of the face; determining that a third IR sensor is covered, based at least in part on the third IR sensor not receiving reflected IR light; capturing, by a first IR sensor at a first time, a first image including reflected IR light from the first IR source; capturing, by a second IR sensor at the first time, a second image including reflected IR light from the second IR source; determining a first portion of the first image, the first portion including facial features that meet a minimum level of distinctiveness; locating a second portion of the second image, the second portion of the second image corresponding to the first portion of the first image; aligning the first portion of the first image and the second portion of the second image, based at least in part on first information; determining a first position of the face based at least on the first image; determining a second position of the face based at least on the second image; determining a relative position of the face based at least on a difference between the first position of the face and the second position of the face; and determining a control input for the electronic device based on the relative position of the face.

2. The computer-implemented method of claim 1, wherein the first information includes information relating to at least a color, a brightness, a hue, a light level, one or more coordinates, or a pixel.

3. The computer-implemented method of claim 1, wherein aligning the first portion of the first image and the second portion of the second image further comprises: identifying a first target portion in the second image based at least in part on one or more coordinates of the first portion of the first image; determining that a first match score between the first portion of the first image and the first target portion in the second image is lower than a threshold match score; identifying a second target portion in the second image; determining that a second match score between the first portion of the first image and the second target portion in the second image is above the threshold match score; and using the second target portion in the second image as the second portion of the second image.

4. The computer-implemented method of claim 1, wherein determining the first portion of the first image further comprises: determining a first size of the first portion of the first image; determining that the facial features included in the first portion of the first image do not meet the minimum level of distinctiveness; determining a second portion of the first image, the second portion having a second size, the second size larger than the first size; determining that the facial features included in the second portion of the first image meet the minimum level of distinctiveness; and using the second portion of the first image in place of the first portion of the first image.

5. The computer-implemented method of claim 1, further comprising determining an approximate distance between the electronic device and the portion of the face based at least on the first portion of the first image and the second portion of the second image.

6. The computer-implemented method of claim 1, further comprising: illuminating, by a third IR source of the electronic device, at least the portion of the face; and capturing, by a fourth IR sensor, a third image including reflected IR light from the third IR source.

7. The computer-implemented method of claim 1, further comprising: capturing, by a camera, an ambient light image including at least the portion of the face; determining an approximate head position of the face with respect to the electronic device based at least in part upon the ambient light image; and determining the relative position of the face based at least on the ambient light image, the first portion of the first image, and the second portion of the second image.

8. The computer-implemented method of claim 1, further comprising: illuminating, by a third IR source of the electronic device, at least the portion of the face; and capturing, by the second IR sensor, a third image including reflected IR light from the third IR source.

9. The computer-implemented method of claim 1, further comprising: illuminating, by the first IR source, radiation within a first range of wavelengths capable of being substantially reflected by areas of the face; and illuminating, by the second IR source, radiation within a second range of wavelengths capable of being substantially absorbed by the areas of the face.

10. A computing device, comprising: a first infrared (IR) source; a second IR source; a first IR sensor; a second IR sensor; a third IR sensor; at least one processor; and memory including non-transitory instructions that, when executed by the processor, cause the computing device to: illuminate, by the first IR source, at least a portion of a face; determine that the third IR sensor is not receiving reflected IR light; determine, based at least on the third IR sensor not receiving reflected IR light, that the third IR sensor is covered; illuminate, by the second IR source, at least the portion of the face; capture, by the first IR sensor at a first time, a first image including reflected IR light from the first IR source; capture, by the second IR sensor at the first time, a second image including reflected IR light from the second IR source; determine a first portion of the first image, the first portion including facial features that meet a minimum level of distinctiveness; locate a second portion of the second image, the second portion of the second image corresponding to the first portion of the first image; align the first portion of the first image and the second portion of the second image, based at least in part on first information; determine a first position of the face based at least on the first image; determine a second position of the face based at least on the second image; determine a relative position of the face based at least on a difference between the first position of the face and the second position of the face; and determine a control input for the computing device based on the relative position of the face.

11. The computing device of claim 10, wherein the first information includes information relating to at least a color, a brightness, a hue, a light level, one or more coordinates, or a pixel.

12. The computing device of claim 10, wherein the memory further includes non-transitory instructions that cause the computing device to: identify a first target portion in the second image based at least in part on one or more coordinates of the first portion of the first image; determine that a first match score between the first portion of the first image and the first target portion in the second image is lower than a threshold match score; identify a second target portion in the second image; determine that a second match score between the first portion of the first image and the second target portion in the second image is above the threshold match score; and use the second target portion in the second image as the second portion of the second image.

13. The computing device of claim 10, wherein the memory further includes non-transitory instructions that cause the computing device to: determine a first size of the first portion of the first image; determine that the facial features included in the first portion of the first image do not meet the minimum level of distinctiveness; determine a second portion of the first image, the second portion having a second size, the second size larger than the first size; determine that the facial features included in the second portion of the first image meet the minimum level of distinctiveness; and use the second portion of the first image in place of the first portion of the first image.

14. The computing device of claim 10, wherein the first IR sensor is positioned substantially adjacent to the first IR source, and the second IR sensor is positioned substantially adjacent to the second IR source.

15. The computing device of claim 10, wherein the first IR source emits radiation within a first range of wavelengths capable of being substantially reflected by areas of the face, and wherein the second IR source emits radiation within a second range of wavelengths capable of being substantially absorbed by the areas of the face.

16. The computing device of claim 10, further comprising: a third IR source, wherein the memory further includes non-transitory instructions that cause the computing device to: illuminate, by the third IR source, at least the portion of the face; and capture a third image using the first IR sensor, the third image including reflected IR light from the third IR source.

17. The computing device of claim 10, wherein the memory further includes non-transitory instructions that cause the computing device to: determine an approximate distance between the computing device and the portion of the face based at least on the first portion of the first image and the second portion of the second image.

18. The computing device of claim 10, further comprising: a third IR source, wherein the memory further includes non-transitory instructions that cause the computing device to: determine that the third IR sensor is not receiving reflected IR light from the third IR source.

19. The computing device of claim 10, further comprising: a camera, wherein the memory further includes non-transitory instructions that cause the computing device to: capture, using the camera, at least one ambient light image including at least the portion of the face; and determine an approximate head position of the face with respect to the computing device based at least in part upon the at least one ambient light image.
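Claims 1, 3, and 5 together describe locating a distinctive patch from the first IR image within the second image using match scores, rejecting candidates below a threshold score, and using the aligned pair to estimate distance. A minimal sketch of that alignment-and-disparity idea (not the patented method) follows; the focal length `f_px` and sensor baseline `baseline_m` are assumed calibration constants that do not appear in the claims, and the toy images are fabricated for illustration.

```python
import numpy as np

def match_score(patch, candidate):
    """Normalized correlation in [-1, 1]; 1.0 means an identical patch."""
    a = patch - patch.mean()
    b = candidate - candidate.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return 0.0 if denom == 0 else float((a * b).sum() / denom)

def find_disparity(img1, img2, top, left, size, threshold=0.9):
    """Slide the patch from img1 along the same row of img2 (claim 3 style):
    keep the best-scoring candidate, reject it if below the threshold score."""
    patch = img1[top:top + size, left:left + size]
    best_score, best_x = -1.0, None
    for x in range(img2.shape[1] - size + 1):
        score = match_score(patch, img2[top:top + size, x:x + size])
        if score > best_score:
            best_score, best_x = score, x
    if best_score < threshold:
        return None                    # no candidate meets the match threshold
    return left - best_x               # horizontal disparity in pixels

# toy 8x8 frames: a distinctive feature shifted 2 pixels between the sensors
img1 = np.zeros((8, 8)); img1[3:5, 4:6] = [[1.0, 2.0], [3.0, 4.0]]
img2 = np.zeros((8, 8)); img2[3:5, 2:4] = [[1.0, 2.0], [3.0, 4.0]]

d = find_disparity(img1, img2, top=3, left=4, size=2)
print(d)                                   # -> 2 (pixels of disparity)
f_px, baseline_m = 500.0, 0.06             # assumed calibration constants
print(round(f_px * baseline_m / d, 3))     # estimated distance, Z = f * B / d
```

The distance estimate uses the standard stereo relation Z = f·B/d, which is one conventional way to realize the "approximate distance" of claims 5 and 17 once the two image portions are aligned; the claims themselves do not prescribe a specific formula.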
Patents cited by this patent (56)
Asmussen, Michael L., Advanced set top terminal having a video call feature.
Mahaffey, Robert B.; Weiss, Lawrence F.; Schwerdtfeger, Richard S.; Kjeldsen, Frederik C., Computer system providing hands free user input via optical means for navigation or zooming.
Osako,Satoru; Morimura,Naoya, Game machine and game program for displaying a first object casting a shadow formed by light from a light source on a second object on a virtual game space.
Nakamura, Hiroki; Nakamura, Takashi; Hayashi, Hirotaka; Tada, Norio; Imai, Takayuki, Liquid crystal display device achieving imaging with high S/N ratio using invisible light.
Bill McKinnon; Tim Newhouse; Eric Rustici, Method and apparatus for representing objects as visually discernable entities based on spatial definition and perspective.
Guenter Brian; Grimm Cindy Marie; Malvar Henrique Sarmento, Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects.
Maes Pattie E. (Somerville MA); Blumberg Bruce M. (Pepperell MA); Darrell Trevor J. (Cambridge MA); Starner Thad E. (Somerville MA); Johnson Michael P. (Cambridge MA); Russell Kenneth B. (Boston MA); Pentl, Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual e.
Okuda, Nobuya; Kobayashi, Tatsuya; Fujimoto, Hirofumi; Matsuyama, Shigenobu, Method for controlling movement of viewing point of simulated camera in 3D video game, and 3D video game machine.
Wehrenberg, Paul J.; Leiba, Aaron; Williams, Richard C.; Falkenburg, David R.; Gerbarg, Louis G.; Chang, Ray L., Methods and apparatuses for operating a portable device based on an accelerometer.
Dehlin, Joel P.; Chen, Christina Summer; Wilson, Andrew D.; Robbins, Daniel C.; Horvitz, Eric J.; Hinckley, Kenneth P.; Wobbrock, Jacob O., Recognizing gestures and using gestures for interacting with software applications.
Cortez, Paulo Cesar; Costa, Rodrigo Carvalho Souza; Siqueira, Robson Da Silva; Neto, Cincinato Furtado Leite; Ribeiro, Fabio Cisne; Anselmo, Francisco Jose Marques; Carvalho, Raphael Torres Santos; Barros, Antonio Carlos Da Silva; Mattos, Cesar Lincoln Cavalcante; Soares, Jose Marques, Systems and methods for synthesis of motion for animation of virtual heads/characters via voice processing in portable devices.