The use of resources on a computing device can be optimized for current conditions to reduce or minimize the amount of resources needed to provide a sufficient level of performance for various types of tasks. In some embodiments, one or more optimization algorithms can analyze information such as a type of task to be performed and the state of one or more environmental conditions to attempt to select a number and configuration of components, such as cameras and illumination elements, to use in performing the tasks. The performance of the tasks can be monitored, and the selection or states updated in order to maintain a sufficient level of performance without using more resources than are necessary.
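The select-monitor-adjust loop described above can be sketched in code. The following Python sketch is a hypothetical illustration only: every name, threshold, and heuristic in it (`select_components`, the 0.5 ambient-light cutoff, the camera counts per task type) is an assumption made for this sketch, not something taken from the patent.

```python
def select_components(task_type: str, ambient_light: float, max_cameras: int = 4):
    """Pick a minimal (cameras, illuminators) configuration for the task.

    Hypothetical heuristic: 3D gesture or head tracking needs stereo (two
    cameras); low ambient light needs an illumination element.
    """
    cameras = 2 if task_type in ("gesture_3d", "head_tracking") else 1
    illuminators = 0 if ambient_light >= 0.5 else 1
    return min(cameras, max_cameras), illuminators

def adjust(cameras: int, illuminators: int, recognized: bool, max_cameras: int = 4):
    """Scale resources up when recognition fails, and probe downward on success.

    Mirrors the monitoring step in the abstract: failure first adds
    illumination, then a camera; success tries to shed a resource while
    always keeping at least one camera active.
    """
    if not recognized:
        if illuminators < 1:
            return cameras, illuminators + 1
        return min(cameras + 1, max_cameras), illuminators
    if cameras > 1:
        return cameras - 1, illuminators
    return cameras, max(illuminators - 1, 0)
```

For example, a simple 2D gesture in bright light starts with one camera and no illumination; a string of missed recognitions would first enable an illuminator, then add a second camera.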
Representative Claims
1. A computer-implemented method of selecting resources to perform gesture recognition on a computing device, comprising: under control of one or more computer systems configured with executable instructions, activating gesture detection on the computing device, the computing device including a plurality of cameras and at least one illumination element; determining a state of at least one environmental condition pertaining to performance of the gesture detection; selecting a minimum number of cameras of the plurality of cameras and a minimum number of illumination elements to use to capture the image information based at least in part on a type of the gesture detection to be performed and the determined state of the at least one environmental condition; capturing image information using the minimum number of cameras and the minimum number of illumination elements; analyzing the captured image information to attempt to recognize a gesture performed by a user; adjusting at least one of the minimum number of cameras, the selection of cameras, a functional state of the cameras, or the minimum number of illumination elements used to capture subsequent image information when no gesture is recognized from the captured image information for a determined period of time; determining a gesture from the captured subsequent image information, the gesture corresponding to a type of input for the computing device; determining that the performance of the gesture detection has been sufficient to detect at least one gesture over a predetermined period of time; determining whether using a fewer number of cameras or a fewer number of illumination elements would detect at least one gesture over the predetermined period of time; and reducing, based at least in part on determining that using the fewer number of cameras or the fewer number of illumination elements would detect at least one gesture, a current number of cameras or a current number of illumination elements used to perform the gesture detection, the current number of cameras, after the reducing, including at least one camera used to perform the gesture detection.

2. The computer-implemented method of claim 1, further comprising: determining, after analyzing the captured image information to attempt to recognize the gesture performed by the user, that no gesture is recognized.

3. The computer-implemented method of claim 1, wherein selecting the minimum number of cameras further comprises detecting a right hand or left hand of the user visible by at least one of the cameras.

4. The computer-implemented method of claim 1, further comprising: determining the minimum number of illumination elements based at least in part on the type of the gesture recognition to be performed and the state of the at least one environmental condition.

5. A computer-implemented method of selecting resources to perform a task on a computing device, comprising: selecting, using the at least one processor of the computing device, a number of components on the computing device to use to capture image information for a task to be performed on the computing device, the number of components including a camera; attempting to perform the task using at least the image information captured using the selected number of components; monitoring performance of at least a portion of the task; adjusting, using the at least one processor of the computing device, at least one of the number of components or a selection of components used to capture image information based at least in part on the monitored performance, wherein the number of components are selected to utilize an amount of resources on the computing device that is adjusted for current environmental conditions and sufficient to enable the task to be performed to at least a determined level of performance; determining that the task has been performed at the determined level of performance for at least a predetermined period of time; determining whether using a fewer number of components would perform the task to the determined level of performance; and reducing, based at least in part on determining that the fewer number of components would perform the task, a current number of components used to perform the task to the determined level of performance, the current number of components, after the reducing, including at least one camera used to perform the task.

6. The computer-implemented method of claim 5, wherein the number of components on the computing device are further selected to analyze the captured image information.

7. The computer-implemented method of claim 6, wherein the type of components includes at least one of a camera, an illumination source, an electronic gyroscope, an inertial sensor, an electronic compass, a pressure sensor, a light sensor, a processor, or an algorithm.

8. The computer-implemented method of claim 7, wherein the algorithm includes at least one of a machine learning algorithm, an edge histogram analysis algorithm, an optimization algorithm, or an adaptive control algorithm.

9. The computer-implemented method of claim 5, wherein the at least one environmental condition includes at least one of an amount of ambient light, a confidence in a recognition algorithm, a distance to an object, an amount of motion, a confidence in a result, or an amount of remaining battery life.

10. The computer-implemented method of claim 5, wherein adjusting at least one of the number of components or the type of components used to capture image information includes increasing the number of components or adjusting a functional aspect of at least one of the components when the task is not performed to at least the determined level of performance.

11. The computer-implemented method of claim 5, wherein adjusting at least one of the number or type of components used to capture image information includes decreasing the number of components or adjusting a functional aspect of at least one of the components when the task is predicted to be able to be performed to at least the determined level of performance after the adjusting.

12. The computer-implemented method of claim 11, wherein the capability of at least one of the components includes at least one of a resolution of a camera, a sensitivity of a sensor, or an intensity of an illumination element.

13. The computer-implemented method of claim 5, wherein the type of task includes at least one of gesture recognition, facial recognition, head tracking, gaze tracking, and object tracking.

14. The computer-implemented method of claim 5, wherein at least the portion of the task includes at least one of image capture, pattern matching, gaze direction determination, head location determination, object identification, facial identification, motion detection, or gesture recognition.

15. The computer-implemented method of claim 5, wherein the selected number and type of components is based at least in part upon historical performance data for a user of the computing device.

16. The computer-implemented method of claim 5, further comprising: predicting a change in at least one of an environmental condition or a position of a user with respect to the computing device; and adjusting at least one of the number, the type, or the state of at least one component on the computing device in response to the predicted change.

17. A computing device, comprising: a device processor; at least one camera; at least one light source; and a memory device including instructions operable to be executed by the processor to perform a set of actions, enabling the computing device to: select a number of components on the computing device to use to capture the image information for a task to be performed on the computing device, the number of components including the at least one camera; analyze the captured image information to attempt to complete at least a portion of the task; adjust at least one of the number of components or a functional state of one of the components used to capture image information when at least a portion of the task is unable to be completed to a determined level of performance, wherein the number of components are selected to utilize an amount of resources on the computing device that is adjusted for current environmental conditions; determine that the task has been performed to the determined level of performance for at least a predetermined period of time; determine whether using a fewer number of components would perform the task to the determined level of performance; and reduce, based at least in part on determining that using the fewer number of components would perform the task, a current number of components used to perform the task to the determined level of performance, the current number of components, after being reduced, including at least one camera used to perform the task.

18. The computing device of claim 17, wherein the instructions when executed further cause the computing device to: select a number of components on the computing device to use to analyze the captured image information.

19. The computing device of claim 17, wherein the number of components includes at least one of a camera, an illumination source, an electronic gyroscope, an inertial sensor, an electronic compass, a pressure sensor, a light sensor, a processor, or an algorithm.

20. The computing device of claim 17, wherein the instructions when executed further cause the computing device to: determine a hand of a user visible to perform a gesture, the selected number of components on the computing device to use to capture the image information being based at least in part upon the determined visible hand.

21. A non-transitory computer-readable storage medium storing instructions that, when executed by a processor, cause a computing device to: select a number of components on the computing device to use to capture image information for a task, the number of components including a camera; analyze the captured image information according to a type of the task to be performed; adjust at least one of the number of components or a functional state of one of the components used to capture or analyze the image information when the task is unable to be performed to at least a minimum level of performance, wherein the number of components is selected to utilize an amount of resources on the computing device that is adjusted for current environmental conditions; determine that the task has been performed to the minimum level of performance for at least a predetermined period of time; determine whether using a fewer number of components would perform the task to the minimum level of performance; and reduce, based at least in part on determining that the fewer number of components would perform the task, a current number of components used to perform the task to the minimum level of performance, the current number of components, after being reduced, including at least one camera used to perform the task.

22. The non-transitory computer-readable storage medium of claim 21, wherein the instructions when executed further cause the computing device to: select at least one component on the computing device to use to analyze the captured image information.

23. The non-transitory computer-readable storage medium of claim 21, wherein the task includes at least one of pattern matching, object identification, facial identification, motion detection, gaze direction determination, head location determination, or gesture recognition.

24. The non-transitory computer-readable storage medium of claim 21, wherein the instructions when executed further cause the computing device to: predict a change in at least one of an environmental condition or a position of a user; and adjust at least one of the number of components or a functional state of at least one component on the computing device in response to the predicted change.

25. The non-transitory computer-readable storage medium of claim 21, wherein the instructions when executed further cause the computing device to: monitor a power level of a battery of the computing device; and adjust at least one of the number of components or a functional state of at least one component on the computing device when the power level drops below a determined value.

26. The computer-implemented method of claim 1, wherein the selection of the minimum number of cameras is based at least in part upon a determination of a right or left hand of the user holding the computing device.
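The downward-probing step that recurs in claims 1, 5, 17, and 21 (after sustained success, test whether fewer components still suffice, and reduce only if so, never dropping below one camera) can be illustrated with a short hypothetical sketch. The function name, the 0.9 threshold, the 20-sample trial window, and the `detect_with` callback are all assumptions of this sketch, not language from the claims.

```python
def probe_and_reduce(num_cameras, detect_with, threshold=0.9, trials=20):
    """Probe whether one fewer camera still meets the detection threshold.

    `detect_with(n)` is an assumed callback returning True when a gesture
    is detected in a capture attempt using n cameras. The reduction is
    kept only if the trial window stays above `threshold`, and the count
    never drops below one camera.
    """
    if num_cameras <= 1:
        return num_cameras  # always keep at least one camera active
    fewer = num_cameras - 1
    hits = sum(detect_with(fewer) for _ in range(trials))
    return fewer if hits / trials >= threshold else num_cameras
```

The design choice here mirrors the claim language: the reduction is conditional on a measured determination that fewer components would still perform the task, rather than an unconditional decrement.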
Patents Cited by This Patent (40)
Birchfield,Stanley T.; Gillmor,Daniel K., Acoustic source localization system and method.
Aaron, Joseph D.; Brunet, Peter Thomas; Kjeldsen, Frederik C. M.; Luther, Paul S.; Mahaffey, Robert Bruce, Method and apparatus for providing visual feedback of speech production.
Maes, Pattie E. (Somerville, MA); Blumberg, Bruce M. (Pepperell, MA); Darrell, Trevor J. (Cambridge, MA); Starner, Thad E. (Somerville, MA); Johnson, Michael P. (Cambridge, MA); Russell, Kenneth B. (Boston, MA); Pentl, Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual e.
Okuda, Nobuya; Kobayashi, Tatsuya; Fujimoto, Hirofumi; Matsuyama, Shigenobu, Method for controlling movement of viewing point of simulated camera in 3D video game, and 3D video game machine.
Gould, Joel M.; Steele, Elizabeth E.; McGrath, Frank J.; Squires, Steven D.; Parke, Joel W., Method of speech command recognition with dynamic assignment of probabilities according to the state of the controlled a.
Basu, Sankar; de Cuetos, Philippe Christian; Maes, Stephane Herman; Neti, Chalapathy Venkata; Senior, Andrew William, Methods and apparatus for audio-visual speech detection and recognition.
Stork, David G. (Stanford, CA); Wolff, Gregory J. (Mountain View, CA), Neural network acoustic and visual speech recognition system training method and apparatus.
Dehlin, Joel P.; Chen, Christina Summer; Wilson, Andrew D.; Robbins, Daniel C.; Horvitz, Eric J.; Hinckley, Kenneth P.; Wobbrock, Jacob O., Recognizing gestures and using gestures for interacting with software applications.