System and method for controlling a camera based on processing an image captured by other camera
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): H04N-005/232; H04N-019/137; H04N-019/597; G06K-009/00; H04N-005/235; H04N-005/238
Application number: US-0028852 (2015-04-19)
Registration number: US-9661215 (2017-05-23)
International application number: PCT/IL2015/050413 (2015-04-19)
International publication number: WO2015/162605 (2015-10-29)
Inventor / Address: Sivan, Ishay
Applicant / Address: SNAPAID LTD.
Agent / Address: May Patents Ltd. c/o Dorit Shem-Tov
Citation information: times cited: 5; patents cited: 87
Abstract
A device comprises a first digital camera having a first center line of sight and a second digital camera having a second center line of sight that is parallel to and opposing the first. A method controls the first camera based on estimating the angular deviation between a person's gaze direction and the line of sight of the first digital camera. A human face is detected in an image captured as an image file by the second digital camera, using a face detection algorithm. An angular deviation α is estimated, defined between the second center line of sight and an imaginary line from the second camera to the detected human face, based on the captured image; and an angular deviation β is estimated, defined between that imaginary line and the human face gaze direction, based on the captured image.
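The estimation pipeline described in the abstract reduces to two formulas given in the claims: α = γ·(DEV/HD) (claim 12) and φ = β − α (claim 27). A minimal Python sketch follows; the function names, the choice of degrees as units, and the example numbers are illustrative assumptions, not part of the patent:

```python
def estimate_alpha(dev_px: float, width_px: float, fov_deg: float) -> float:
    """Claim 12: angular deviation alpha between the rear camera's center
    line of sight and the line to the detected face, approximated from the
    face's offset from the image center (dev_px) relative to the image
    width (width_px), scaled by the camera's angular field of view."""
    return fov_deg * (dev_px / width_px)

def estimate_phi(alpha_deg: float, beta_deg: float) -> float:
    """Claim 27: deviation between the front camera's line of sight and the
    subject's gaze direction, phi = beta - alpha."""
    return beta_deg - alpha_deg

# Example: face detected 200 px from center in a 1000 px-wide frame,
# 60 degree horizontal field of view, gaze angle beta estimated at 25 degrees.
alpha = estimate_alpha(200, 1000, 60)   # 12.0 degrees
phi = estimate_phi(alpha, 25)           # 13.0 degrees
```

The resulting φ would then be compared against a threshold (claims 31 and 32) to decide whether to trigger, control, or inhibit the camera action.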
Representative Claims
1. A method for controlling a first camera by estimating the angular deviation between a person gaze direction and a digital camera line of sight, for use with a device including a first digital camera having a first center line of sight and a second digital camera having a second center line of sight that is parallel and opposing the first center line of sight, the method comprising the steps of: capturing an image to an image file by the second digital camera; detecting a human face in the image by using a face detection algorithm; estimating an angular deviation α between the second center line of sight and an imaginary line from the second camera to the detected human face based on the captured image; estimating an angular deviation β between the imaginary line from the second camera to the detected face and the human face gaze direction based on the captured image; estimating an angular deviation φ between the first center line of sight and the human face gaze direction based on the estimated angular deviation α and the estimated angular deviation β; and initiating, controlling, stopping, or inhibiting an action in response to the value of the estimated angular deviation φ.

2. The method according to claim 1 wherein the step of estimating the angular deviation α includes estimating a horizontal angular deviation between the second horizontal center line of sight and the horizontally detected human face; and wherein the step of estimating the angular deviation β includes estimating the horizontal angular deviation between the horizontal line of sight to the detected face and the horizontal human face gaze direction.

3.
The method according to claim 1 wherein the step of estimating the angular deviation α includes estimating a vertical angular deviation between the second vertical center line of sight and the vertically detected human face; and wherein the step of estimating the angular deviation β includes estimating the vertical angular deviation between the vertical line of sight to the detected face and the vertical human face gaze direction.

4. A non-transitory tangible computer readable storage media comprising code to perform the steps of the method of claim 1.

5. The device housed in a single enclosure and comprising in the single enclosure the first and second digital cameras, a memory for storing computer executable instructions, and a processor for executing the instructions, the processor configured by the memory to perform acts comprising the method of claim 1.

6. The device according to claim 5 wherein the single enclosure is a portable or a hand-held enclosure and the device is battery-operated.

7. The device according to claim 5 wherein the device is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), or an image processing device.

8. The method according to claim 1 wherein the angular deviation α is estimated to be a set value.

9. The method according to claim 8 wherein the set value is 30°, 45°, or 60°.

10. The method according to claim 1 wherein the angular deviation α is estimated based on the detected face location in the captured image.

11. The method according to claim 10 wherein the angular deviation α is estimated based on the detected face location deviation from the center of the image.

12.
The method according to claim 11 wherein the angular deviation α is calculated based on the equation α=γ*(DEV/HD), wherein DEV is the detected face location horizontal deviation from the center of the image, HD is the total horizontal distance of the captured image, and γ is the horizontal angular field of view of the second camera.

13. The method according to claim 11 wherein the angular deviation α is calculated based on the equation α=γ*(DEV/HD), wherein DEV is the detected face location vertical deviation from the center of the image, HD is the total vertical distance of the captured image, and γ is the vertical angular field of view of the second camera.

14. The method according to claim 1 wherein the step of estimating the angular deviation β is based on applying a human gaze direction estimation algorithm on the captured image.

15. The method according to claim 14 wherein the human gaze direction estimation algorithm is based on, or using, eye detection or eye tracking.

16. The method according to claim 14 wherein the human gaze direction estimation algorithm is based on, or using, head pose detection.

17. The method according to claim 16 wherein the human gaze direction estimation algorithm is further based on, or using, eye detection or eye tracking.

18. The method according to claim 14 wherein the human gaze direction estimation algorithm is further based on, or using, facial landmarks detection.

19. The method according to claim 14 wherein the human gaze direction estimation algorithm is further based on, or using, detection of one or more human face parts.

20. The method according to claim 19 wherein the human face parts include nose, right nostril, left nostril, right cheek, left cheek, right eye, left eye, right ear, or left ear.

21. The method according to claim 19 wherein the angular deviation β is estimated based on the detected human face parts.

22.
The method according to claim 21 wherein in response to detecting both a right ear and a left ear, the angular deviation β is estimated to be 0°.

23. The method according to claim 21 wherein in response to detecting a right ear, a left eye, and a right eye, and not detecting a left ear, the angular deviation β is estimated to be 30°.

24. The method according to claim 21 wherein in response to detecting only a right ear and a right eye, and not detecting a left ear and a left eye, the angular deviation β is estimated to be 90°.

25. The method according to claim 14 wherein the device further comprises a third digital camera having a third center line of sight that is parallel and opposing to the first center line of sight, the method further comprising the steps of: capturing an additional image to an additional image file by the third digital camera; and forming a 3D representation of the detected human face by combining the captured image and the additional captured image; and wherein the step of estimating the angular deviation β includes analyzing the formed 3D human face representation.

26. The method according to claim 1 wherein the image file is in a format that is according to, based on, or consists of Portable Network Graphics (PNG), Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), Windows bitmap (BMP), Exchangeable image file format (Exif), Tagged Image File Format (TIFF), or Raw Image Formats (RIF).

27. The method according to claim 1 wherein the angular deviation φ is calculated as φ=β−α.

28. The method according to claim 1 wherein the action comprises controlling the first camera.

29. The method according to claim 28 wherein the controlling of the first digital camera includes changing a setting of the first digital camera.

30. The method according to claim 28 wherein the controlling of the first digital camera includes saving an image captured by the first camera in a memory.

31.
The method according to claim 1 for use with a maximum threshold value or a minimum threshold value, and wherein the method further comprises the step of respectively comparing the angular deviation φ to the maximum or minimum threshold.

32. The method according to claim 31 further comprising the step of respectively initiating, stopping, controlling, or inhibiting the action if the value of the angular deviation φ is higher than the maximum value or is lower than the minimum value.

33. The method according to claim 32 further comprising the step of respectively initiating, stopping, controlling, or inhibiting the action if the value of the angular deviation φ is higher than the maximum value or is lower than the minimum value for a time interval.

34. The method according to claim 33 wherein the time interval is 0.5, 1, 2, or 3 seconds.

35. The method according to claim 1 wherein the device further includes a visual annunciator comprising a visual signaling component, and wherein the taking of an action comprises activating or controlling the visual annunciator.

36. The method according to claim 35 wherein the visual signaling component is a visible light emitter.

37. The method according to claim 36 wherein the visible light emitter is a semiconductor device, an incandescent lamp, or a fluorescent lamp.

38. The method according to claim 36 wherein the visible light emitter is adapted for a steady illumination and for blinking in response to the value of the estimated angular deviation φ.

39. The method according to claim 36 wherein the illumination level of the visible light emitter is in response to the value of the estimated angular deviation.

40. The method according to claim 36 wherein the visible light emitter location, type, color, or steadiness is in response to the value of the estimated angular deviation φ.

41.
The method according to claim 36 wherein the visible light emitter is a numerical or an alphanumerical display for displaying a value corresponding to the value of the estimated angular deviation.

42. The method according to claim 41 wherein the visible light emitter is based on one out of LCD (Liquid Crystal Display), TFT (Thin-Film Transistor), FED (Field Emission Display), or CRT (Cathode Ray Tube).

43. The method according to claim 1 wherein the device further includes an audible annunciator comprising an audible signaling component for emitting a sound, and wherein the taking of an action comprises activating or controlling the audible annunciator.

44. The method according to claim 43 wherein the audible signaling component comprises an electromechanical or piezoelectric sounder.

45. The method according to claim 44 wherein the audible signaling component comprises a buzzer, a chime, or a ringer.

46. The method according to claim 43 wherein the audible signaling component comprises a loudspeaker and the device further comprises a digital to analog converter coupled to the loudspeaker.

47. The method according to claim 43 wherein the audible signaling component is operative to generate a single or multiple tones.

48. The method according to claim 43 wherein the sound emitted from the audible signaling component is in response to the value of the estimated angular deviation φ.

49. The method according to claim 48 wherein the volume, type, steadiness, pitch, rhythm, dynamics, timbre, or texture of the sound emitted from the audible signaling component is in response to the value of the estimated angular deviation φ.

50. The method according to claim 43 wherein the sound emitted from the audible signaling component is a human voice talking.

51. The method according to claim 50 wherein the sound is a syllable, a word, a phrase, a sentence, a short story, or a long story in response to the value of the estimated angular deviation φ.

52.
A device for controlling a camera operation based on image processing of an image captured by another camera, the device comprising: a first digital camera having a first center line of sight; a second digital camera having a second center line of sight that is parallel and opposing to the first center line of sight, and configured to capture an image to an image file; a memory for storing computer executable instructions and for storing the image file; a processor for executing the instructions, the processor coupled to the memory and to the first and second digital cameras, and configured by the memory for detecting a human face in the image by using a face detection algorithm and for estimating an angular deviation φ between the first center line of sight and a human face gaze direction based on an estimated angular deviation α and an estimated angular deviation β; a control port coupled to the processor for outputting a control signal in response to the value of the estimated angular deviation φ; and a single portable or hand-held enclosure housing the first and second digital cameras, the memory, the processor, and the control port, wherein the angular deviation α is defined between the second center line of sight and an imaginary line from the second camera to the detected human face, and the angular deviation β is defined between the imaginary line from the second camera to the detected face and the human face gaze direction.

53. The device according to claim 52 wherein the angular deviation α is defined between the second horizontal center line of sight and a horizontal imaginary line from the second camera to the detected human face, and the angular deviation β is defined between the horizontal imaginary line from the second camera to the detected face and the horizontal human face gaze direction.

54.
The device according to claim 52 wherein the angular deviation α is defined between the second vertical center line of sight and a vertical imaginary line from the second camera to the detected human face, and the angular deviation β is defined between the vertical imaginary line from the second camera to the detected face and the vertical human face gaze direction.

55. The device according to claim 52 wherein the device is a notebook, a laptop computer, a media player, a cellular phone, a Personal Digital Assistant (PDA), or an image processing device.

56. The device according to claim 52 wherein the angular deviation α is estimated by the processor to be a set value.

57. The device according to claim 56 wherein the set value is 30°, 45°, or 60°.

58. The device according to claim 52 wherein the angular deviation α is estimated by the processor based on the detected face location in the captured image.

59. The device according to claim 58 wherein the angular deviation α is estimated by the processor based on the detected face location deviation from the center of the image.

60. The device according to claim 59 wherein the angular deviation α is calculated by the processor based on the equation α=γ*(DEV/HD), wherein DEV is the detected face location horizontal deviation from the center of the image, HD is the total horizontal distance of the captured image, and γ is the horizontal angular field of view of the second camera.

61. The device according to claim 59 wherein the angular deviation α is calculated by the processor based on the equation α=γ*(DEV/HD), wherein DEV is the detected face location vertical deviation from the center of the image, HD is the total vertical distance of the captured image, and γ is the vertical angular field of view of the second camera.

62. The device according to claim 52 wherein the angular deviation β is estimated by the processor based on applying a human gaze direction estimation algorithm on the captured image.

63.
The device according to claim 62 wherein the human gaze direction estimation algorithm is based on, or using, eye detection or eye tracking.

64. The device according to claim 62 wherein the human gaze direction estimation algorithm is based on, or using, head pose detection.

65. The device according to claim 64 wherein the human gaze direction estimation algorithm is further based on, or using, eye detection or eye tracking.

66. The device according to claim 62 wherein the human gaze direction estimation algorithm is further based on, or using, facial landmarks detection.

67. The device according to claim 62 wherein the human gaze direction estimation algorithm is further based on, or using, detection of one or more human face parts.

68. The device according to claim 67 wherein the human face parts include nose, right nostril, left nostril, right cheek, left cheek, right eye, left eye, right ear, or left ear.

69. The device according to claim 67 wherein the angular deviation β is estimated based on the detected human face parts.

70. The device according to claim 69 wherein in response to detecting both a right ear and a left ear, the angular deviation β is estimated to be 0°.

71. The device according to claim 69 wherein in response to detecting a right ear, a left eye, and a right eye, and not detecting a left ear, the angular deviation β is estimated to be 30°.

72. The device according to claim 69 wherein in response to detecting only a right ear and a right eye, and not detecting a left ear and a left eye, the angular deviation β is estimated to be 90°.

73.
The device according to claim 62 further comprising a third digital camera coupled to the processor for capturing an additional image to an additional image file, the third digital camera having a third center line of sight that is parallel and opposing to the first center line of sight, and wherein the angular deviation β is estimated by the processor based on analyzing a 3D human face representation that is formed by combining the captured image and the additional captured image.

74. The device according to claim 52 wherein the image file is in a format that is according to, based on, or consists of Portable Network Graphics (PNG), Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), Windows bitmap (BMP), Exchangeable image file format (Exif), Tagged Image File Format (TIFF), or Raw Image Formats (RIF).

75. The device according to claim 52 wherein the angular deviation φ is calculated as φ=β−α.

76. The device according to claim 52 wherein the control port is coupled to control the first digital camera.

77. The device according to claim 76 wherein the control port is coupled to control the first digital camera by changing a setting of the first digital camera.

78. The device according to claim 52 for use with a maximum threshold value or a minimum threshold value, and wherein the control signal is produced as a response to comparing the angular deviation φ to the maximum or minimum threshold.

79. The device according to claim 78 wherein the control signal is produced in response to the value of the angular deviation φ being higher than the maximum value or being lower than the minimum value.

80. The device according to claim 52 further comprising a visual annunciator comprising a visual signaling component coupled to the control port for activating or controlling the visual annunciator.

81. The device according to claim 80 wherein the visual signaling component is a visible light emitter.

82.
The device according to claim 81 wherein the visible light emitter is a semiconductor device, an incandescent lamp, or a fluorescent lamp.

83. The device according to claim 81 wherein the visible light emitter is adapted for a steady illumination and for blinking in response to the value of the estimated angular deviation φ.

84. The device according to claim 81 wherein the illumination level of the visible light emitter is in response to the value of the estimated angular deviation.

85. The device according to claim 81 wherein the visible light emitter location, type, color, or steadiness is in response to the value of the estimated angular deviation φ.

86. The device according to claim 81 wherein the visible light emitter is a numerical or an alphanumerical display for displaying a value corresponding to the value of the estimated angular deviation.

87. The device according to claim 86 wherein the visible light emitter is based on one out of LCD (Liquid Crystal Display), TFT (Thin-Film Transistor), FED (Field Emission Display), or CRT (Cathode Ray Tube).

88. The device according to claim 52 further comprising an audible annunciator comprising an audible signaling component for emitting a sound coupled to the control port for activating or controlling the audible annunciator.

89. The device according to claim 88 wherein the audible signaling component comprises an electromechanical or piezoelectric sounder.

90. The device according to claim 89 wherein the audible signaling component comprises a buzzer, a chime, or a ringer.

91. The device according to claim 88 wherein the audible signaling component comprises a loudspeaker and the device further comprises a digital to analog converter coupled to the loudspeaker.

92. The device according to claim 88 wherein the audible signaling component is operative to generate a single or multiple tones.

93.
The device according to claim 88 wherein the sound emitted from the audible signaling component is in response to the value of the estimated angular deviation φ.

94. The device according to claim 93 wherein the volume, type, steadiness, pitch, rhythm, dynamics, timbre, or texture of the sound emitted from the audible signaling component is in response to the value of the estimated angular deviation φ.

95. The device according to claim 88 wherein the sound emitted from the audible signaling component is a human voice talking.

96. The device according to claim 95 wherein the sound is a syllable, a word, a phrase, a sentence, a short story, or a long story in response to the value of the estimated angular deviation φ.
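The coarse part-based β heuristic of claims 22 through 24 and the threshold gating of claims 31 and 32 can be sketched in Python. The set-of-labels detector output, the function names, and the 20° default threshold are assumptions made for illustration; the patent does not prescribe them:

```python
def estimate_beta(parts: set) -> float:
    """Coarse gaze-angle heuristic from detected face parts (claims 22-24).
    'parts' is the set of labels returned by a hypothetical part detector."""
    if {"right_ear", "left_ear"} <= parts:
        return 0.0    # both ears visible: face roughly frontal (claim 22)
    if {"right_ear", "left_eye", "right_eye"} <= parts and "left_ear" not in parts:
        return 30.0   # head turned partway (claim 23)
    if {"right_ear", "right_eye"} <= parts and not parts & {"left_ear", "left_eye"}:
        return 90.0   # full profile (claim 24)
    raise ValueError("no claimed rule matches the detected parts")

def gate_action(phi_deg: float, max_threshold_deg: float = 20.0) -> bool:
    """Claims 31-32: compare phi to a maximum threshold and trigger (or
    inhibit) the action accordingly; the 20 degree default is arbitrary."""
    return phi_deg > max_threshold_deg
```

Claims 33 and 34 additionally require the threshold condition to hold for a time interval (e.g. 0.5 to 3 seconds), which would wrap `gate_action` in a simple debounce timer.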
Cited Patents

Buhr John D.; Goodwin Robert M.; Koeng Frederick R.; Rivera Jose E., Digital photofinishing system including scene balance, contrast normalization, and image sharpening digital image processing.
Roberts Marc K. (Burke VA) Chikosky Matthew A. (Springfield VA) Speasl Jerry A. (Vienna VA), Electronic still video camera with direct personal computer (PC) compatible digital format output.
Steffens Johannes Bernhard ; Elagin Egor Valerievich ; Nocera Luciano Pasquale Agostino ; Maurer Thomas ; Neven Hartmut, Face recognition from video images.
Parulski Kenneth A. (Rochester NY) Hamel Robert H. (Walworth NY) Acello John J. (East Rochester NY), Hand-manipulated electronic camera tethered to a personal computer.
DeBan Abdou F. (Dayton OH) Xu Tianning (Dayton OH) Tumey David M. (Huber Heights OH) Arndt Craig M. (Dayton OH), Identification and verification system.
Steinberg, Eran; Corcoran, Peter; Prilutsky, Yury, Method and apparatus for initiating subsequent exposures based on determination of motion blurring artifacts.
Zhu, Youding; Musick, Jr., Charles; Kay, Robert; Powers, III, William Robert; Wilkinson, Dana; Reynolds, Stuart, Method and system for head tracking and pose estimation.
Mashimo Yukio (Tokyo JA) Sakurada Nobuaki (Kanagawa JA) Ito Tadashi (Kanagawa JA) Ito Fumio (Kanagawa JA) Shinoda Nobuhiko (Tokyo JA), System for exposure measurement and/or focus detection by means of image senser.
Mashimo Yukio (Tokyo JPX) Sakurada Nobuaki (Kanagawa JPX) Ito Tadashi (Kanagawa JPX) Ito Fumio (Kanagawa JPX) Shinoda Nobuhiko (Tokyo JPX), System for exposure measurement and/or focus detection by means of image sensor.
Kojima Kazuaki (Nagaokakyo JPX) Kuno Tetsuya (Nagaokakyo JPX) Sugiura Hiroaki (Nagaokakyo JPX) Yamada Takeshi (Nagaokakyo JPX), Video signal processor for detecting flesh tones in an image.
Medasani, Swarup; Meltzer, Jason; Xu, Jiejun; Chen, Zhichao; Sundareswara, Rashmi N.; Payton, David W.; Uhlenbrock, Ryan M.; Barajas, Leandro G.; Kim, Kyungnam, Method for object localization and pose estimation for an object of interest.