Wearable smart device for hazard detection and warning based on image and audio data
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-005/232; G08B-021/02; G01C-021/36
Application number: US-0601506 (2015-01-21)
Registration number: US-9576460 (2017-02-21)
Inventor / Address: Dayal, Rajiv
Applicant / Address: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent / Address: Snell & Wilmer LLP
Citation information: cited by 0 patents; cites 158 patents
Abstract
A wearable smart device for providing hazard warning information to a user. The wearable smart device includes a microphone configured to detect audio data associated with a potential hazard. The wearable smart device also includes a camera configured to detect image data associated with the potential hazard. The wearable smart device also includes a processor coupled to the microphone and the camera and configured to determine whether the potential hazard presents a real hazard based on the detected audio data and the detected image data.
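The abstract describes fusing audio and image evidence to decide whether a potential hazard is real, and claim 1 below specifies that the higher-quality modality receives more weight. The patent does not disclose a concrete formula, so the following is a minimal hypothetical sketch of such quality-weighted fusion; the function name, the normalized-weight scheme, and the decision threshold are all assumptions, not part of the patent.

```python
def fuse_hazard_confidence(audio_conf, audio_quality,
                           image_conf, image_quality,
                           threshold=0.5):
    """Combine audio and image hazard confidences (each in [0, 1]),
    weighting the higher-quality modality more heavily.

    Hypothetical sketch of the weighting described in claim 1;
    the formula and threshold are assumptions, not from the patent.
    """
    total_quality = audio_quality + image_quality
    if total_quality == 0:
        # No usable evidence from either modality.
        return False
    # Each modality's weight is proportional to its quality score,
    # so the higher-quality channel dominates the combined estimate.
    w_audio = audio_quality / total_quality
    w_image = image_quality / total_quality
    combined = w_audio * audio_conf + w_image * image_conf
    return combined >= threshold

# Example: strong, high-quality audio evidence outweighs a weak,
# low-quality image reading, so the hazard is flagged as real.
print(fuse_hazard_confidence(0.9, 0.8, 0.2, 0.1))
```

Quality here could be any scalar derived from the factors the claims enumerate (lack of obstruction, accuracy, match against stored data); how those are scored is left open by the patent.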
Representative Claims
1. A wearable computing device for providing hazard warning information comprising: a microphone configured to detect audio data associated with a potential hazard; a camera configured to detect image data associated with the potential hazard; and a processor coupled to the microphone and the camera and configured to determine which of the detected audio data or the detected image data has a higher quality, and to determine whether the potential hazard presents a real hazard to a user based on the detected audio data and the detected image data such that more weight is provided to whichever of the detected audio data or the detected image data has the higher quality.

2. The wearable computing device of claim 1 further comprising a memory coupled to the processor and configured to store memory audio data and memory image data associated with at least one hazard, and wherein the processor is configured to determine whether the potential hazard presents the real hazard by comparing at least one of the detected audio data or the detected image data to at least one of the memory audio data or the memory image data.

3. The wearable computing device of claim 1 further comprising an inertial measurement unit (IMU) coupled to the processor and configured to detect motion data and a GPS sensor configured to detect location data and wherein the processor is further configured to determine a current location of the wearable computing device based on at least two of the detected motion data, the detected location data, the detected audio data, and the detected image data.

4. The wearable computing device of claim 3 further comprising a memory coupled to the processor and configured to store a stored location and at least one hazard associated with the stored location, and wherein the processor is further configured to determine whether the at least one hazard presents a danger to the user based on a comparison of the current location to the stored location.

5. The wearable computing device of claim 1 wherein the microphone includes two microphones spaced apart and the processor is further configured to determine a direction of the potential hazard based on the detected audio data.

6. The wearable computing device of claim 5 wherein the processor is further configured to instruct the camera to focus towards the direction of the potential hazard in response to determining the direction of the potential hazard.

7. The wearable computing device of claim 1 wherein the processor is further configured to determine a distance to the potential hazard based on a volume of the detected audio data.

8. The wearable computing device of claim 1 wherein the processor is further configured to determine a severity level of the potential hazard based on at least one factor.

9. The wearable computing device of claim 8 wherein the at least one factor includes a distance to the potential hazard, whether a distance between the potential hazard and the user is decreasing, a size of the potential hazard, or an audio volume of the potential hazard.

10. The wearable computing device of claim 1 wherein the higher quality corresponds to at least one of a lack of obstruction of the detected audio data or the detected image data, an accuracy of the detected audio data or the detected image data, or a match of the detected audio data or the detected image data to data stored in a memory.

11. A wearable computing device for providing hazard warnings comprising: an upper portion having a first end and a second end; a first lower portion coupled to the first end of the upper portion; a second lower portion coupled to the second end of the upper portion; at least one microphone positioned on the first lower portion, the second lower portion or the upper portion and configured to detect audio data associated with a potential hazard; a camera positioned on the first lower portion or the second lower portion and configured to detect image data associated with the potential hazard; and a processor coupled to the camera and the at least one microphone and configured to determine which of the detected audio data or the detected image data has a higher quality, and to determine whether the potential hazard presents a real hazard to a user by analyzing whichever of the detected audio data or the detected image data has the higher quality.

12. The wearable computing device of claim 11 further comprising a memory coupled to the processor and configured to store memory audio data and memory image data associated with at least one hazard and wherein the processor is configured to determine whether the potential hazard presents the real hazard by comparing at least one of the detected audio data or the detected image data to at least one of the memory audio data or the memory image data.

13. The wearable computing device of claim 11 wherein the at least one microphone includes two microphones and the processor is further configured to determine a direction of the potential hazard based on the detected audio data.

14. The wearable computing device of claim 13 wherein the processor is further configured to cause the camera to focus towards the direction of the potential hazard in response to determining the direction of the potential hazard.

15. The wearable computing device of claim 11 wherein the processor is further configured to determine a distance to the potential hazard based on a volume of the detected audio data.

16. The wearable computing device of claim 11 wherein the processor is further configured to determine a severity level of the potential hazard based on at least one of a distance to the potential hazard, whether the potential hazard is approaching the user, a size of the potential hazard or an audio volume of the potential hazard.

17. The wearable computing device of claim 11 wherein the higher quality corresponds to at least one of a lack of obstruction of the detected audio data or the detected image data, an accuracy of the detected audio data or the detected image data, whether the detected audio data or the detected image data fails to indicate that the potential hazard is present, or a match of the detected audio data or the detected image data to data stored in a memory.

18. A method for providing hazard warnings to a user of a wearable computing device comprising: detecting, by at least two microphones, audio data associated with a potential hazard and including volume information; detecting, by a camera, image data associated with the potential hazard; determining, by a processor, whether the detected audio data or the detected image data has a higher quality; and determining, by the processor, whether the potential hazard presents a real hazard to the user based on the detected audio data and the detected image data by providing more weight to whichever of the detected audio data or the detected image data has the higher quality.

19. The method of claim 18 wherein the higher quality corresponds to at least one of a lack of obstruction of the detected audio data or the detected image data, an accuracy of the detected audio data or the detected image data, whether the detected audio data or the detected image data fails to indicate that the potential hazard is present, or a match of the detected audio data or the detected image data to data stored in a memory.

20. The method of claim 18 further comprising storing, in a memory, stored audio data and stored image data associated with at least one hazard, and wherein determining whether the potential hazard presents the real hazard is based on a comparison of at least one of the detected audio data or the detected image data to at least one of the stored audio data or the stored image data.

21. A method for providing hazard warnings to a user of a wearable smart device comprising: storing, in a memory, hazard data; detecting, by a microphone, audio data associated with a potential hazard; detecting, by a camera, image data associated with the potential hazard; determining, by a processor, which of the detected audio data or the detected image data has a higher quality; and determining, by the processor, whether the potential hazard presents a real hazard to the user based on which of the detected audio data or the detected image data has the higher quality.

22. The method of claim 21 further comprising generating, by a pair of vibration units, haptic feedback indicating a presence of the potential hazard in response to determining that the potential hazard presents the real hazard to the user.
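Claims 5-7 recite determining the hazard's direction from two spaced microphones and its distance from audio volume, but disclose no algorithm. The sketch below shows the standard signal-processing techniques such claims typically cover: time-difference-of-arrival (TDOA) for bearing, and inverse-square (6 dB per doubling of distance) spreading for range. All function names, the reference-level assumption, and the formulas are illustrative assumptions, not taken from the patent.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C


def direction_from_delay(delay_s, mic_spacing_m):
    """Estimate the bearing (degrees off the array's broadside axis)
    of a sound source from the arrival-time difference between two
    spaced microphones, using the far-field TDOA model
    sin(theta) = c * delay / d.  Hypothetical sketch, not the
    patent's disclosed method.
    """
    ratio = SPEED_OF_SOUND * delay_s / mic_spacing_m
    # Clamp to the valid range of asin to tolerate measurement noise.
    ratio = max(-1.0, min(1.0, ratio))
    return math.degrees(math.asin(ratio))


def distance_from_volume(measured_db, reference_db, reference_distance_m=1.0):
    """Estimate distance from loudness, assuming the source's level at a
    known reference distance and free-field inverse-square spreading
    (level drops 20*log10(r) dB).  The reference level would have to come
    from stored hazard data; that linkage is an assumption here.
    """
    return reference_distance_m * 10 ** ((reference_db - measured_db) / 20.0)


# A source arriving simultaneously at both mics lies straight ahead;
# a 20 dB drop from a 94 dB (at 1 m) source implies roughly 10 m away.
print(direction_from_delay(0.0, 0.15))
print(distance_from_volume(74.0, 94.0))
```

A real implementation would estimate the inter-microphone delay by cross-correlating the two audio channels; that step is omitted here for brevity.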
Patents cited by this patent (158)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5 A-9021 Klagenfurt (Karnten) ATX) Pichler Heinrich (Sailerackergasse 38/2 A-1190 Wien (Osterreich) ATX), Information system.
Wellner Pierre D.,GBX ; Flynn Michael J.,GBX ; Carter Kathleen A.,GBX ; Newman William M.,GBX, Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung,Kyung Kwon; Chae,Yeon Sik; Rhee,Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszász u. 40/I. Budapest 1112 HU) Endrei Károly (Fehérvári út 86. Budapest 1119 HU) Kézi László (Zugligeti út 69. Budapest 1121 HU) Endrei Károlyné (Törökvész út 55. Budapest 1021 HU), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.