Wearable earpiece for providing social and environmental awareness
IPC Classification
Country / Type
United States(US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G01C-021/16
G06K-009/00
H04R-001/10
G01C-021/20
G01C-021/36
Application Number
US-0450175
(2014-08-01)
Registration Number
US-10024667
(2018-07-17)
Inventors / Address
Moore, Douglas A.
Djugash, Joseph M. A.
Ota, Yasuhiro
Applicant / Address
TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent / Address
Snell & Wilmer LLP
Citation Information
Times cited: 0
Patents cited: 161
Abstract
An intelligent earpiece to be worn over an ear of a user is described. The earpiece includes a processor connected to the IMU, the GPS unit and the at least one camera. The processor can recognize an object in the surrounding environment by analyzing the image data based on the stored object data and at least one of the inertial measurement data or the location data. The processor can determine a desirable event or action based on the recognized object, the previously determined user data, and a current time or day. The processor can determine a destination based on the determined desirable event or action. The processor can determine a navigation path for navigating the intelligent guidance device to the destination based on the determined destination, the image data, the inertial measurement data or the location data. The processor can determine output data based on the determined navigation path.
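The abstract describes a pipeline: recognize an object, infer a desirable event or action from the object, the user data, and the time of day, pick a destination, and plan a navigation path. The sketch below illustrates that flow with toy data; all function names, the data layout, and the string-match "recognition" are illustrative assumptions, not the patent's method.

```python
# Hypothetical sketch of the processing pipeline in the abstract.
# Function names and data structures are illustrative, not from the patent.

def recognize_object(image_data, stored_objects):
    """Match image data against stored object data (toy substring match)."""
    for obj in stored_objects:
        if obj["signature"] in image_data:
            return obj
    return None

def determine_event(obj, user_data, current_time):
    """Pick an event/action from the recognized object, the previously
    determined user data, and the current time or day."""
    for event in user_data.get("routine", []):
        if event["object"] == obj["name"] and event["time"] == current_time:
            return event
    return None

def plan_navigation(event, current_location):
    """Derive a destination from the event and return a (trivial) path."""
    return [current_location, event["destination"]]

# Example run with toy data
stored = [{"name": "bus stop", "signature": "bus_stop_sig"}]
user = {"routine": [{"object": "bus stop", "time": "08:00",
                     "destination": "office"}]}
obj = recognize_object("...bus_stop_sig...", stored)
event = determine_event(obj, user, "08:00")
path = plan_navigation(event, "home")
print(path)  # ['home', 'office']
```

The output data driving the speaker would then be derived from this path, per the last step of the abstract.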
Representative Claims
1. A computing earpiece to be worn over an ear of a user, comprising:
an inertial measurement unit (IMU) configured to detect inertial measurement data corresponding to a positioning, velocity, or acceleration of the computing earpiece;
a global positioning system (GPS) sensor configured to detect location data corresponding to a location of the computing earpiece;
at least one camera configured to detect image data corresponding to a surrounding environment of the computing earpiece;
an input device configured to receive user input;
a speaker configured to output audio information;
an antenna configured to transmit a wireless signal to a remote device;
a memory configured to store object data regarding a plurality of previously determined objects and corresponding location information, to store previously determined user data associated with the user, and to store danger data corresponding to data that indicates a danger may be present; and
a processor coupled to the IMU, the GPS sensor, the at least one camera, and the memory and configured to:
determine that a potential danger exists when the image data or the inertial measurement data matches the danger data,
control the speaker to output audio data requesting user status information corresponding to an indication of whether the user requests assistance when the potential danger exists,
determine that the remote device should be contacted when at least one of the input device receives the user status information indicating that the user requests assistance or the input device fails to receive the user status information,
control the antenna to transmit the wireless signal to the remote device when the remote device should be contacted,
determine a current location of the computing earpiece based on at least one of the inertial measurement data, the location data, or the image data,
determine an identity of an object in the surrounding environment by limiting an object identification search to a subset of the plurality of previously-determined objects based on the current location of the computing earpiece and by analyzing the image data based on the stored object data and the limited identification search,
determine an event or an action to be performed based on the identified object, the previously determined user data, and a current time or day,
determine a destination based on the determined event or action to be performed,
determine a navigation path from a current location of the computing earpiece to the destination based on the determined destination, the image data, and at least one of the inertial measurement data or the location data, and
control the speaker to output the audio information based on at least one of the recognized object, the determined event or action to be performed, or the navigation path.

2. The computing earpiece of claim 1, further comprising a vibratory motor coupled to the processor and configured to provide haptic information to the user based on at least one of the identified object, the event or action to be performed, or the navigation path.

3. The computing earpiece of claim 1, wherein the memory is configured to store map data and the processor is configured to determine the navigation path based on the image data, the map data, and at least one of the inertial measurement data or the location data.

4. The computing earpiece of claim 1, wherein the antenna is further configured to establish an audio or video communication with the remote device or another portable electronic device or computer used by another person, wherein the processor is further configured to cause the antenna to establish the audio or video communication based on the determined desirable event or action.

5. The computing earpiece of claim 1, wherein the processor is further configured to detect an obstacle along the navigation path and to control the speaker to output data indicating the presence of the obstacle.

6. The computing earpiece of claim 1, further comprising a microphone coupled to the processor and configured to detect a speech of the user or another person, wherein the processor is further configured to:
parse a conversation of the user or the other person into speech elements,
analyze the speech elements based on the previously determined user data, and
determine the event or action to be performed further based on the analyzed speech elements.

7. The computing earpiece of claim 1, wherein the at least one camera includes a stereo pair of cameras configured to detect depth information regarding the surrounding environment.

8. The computing earpiece of claim 1, wherein the at least one camera includes a wide angle camera having a field of view of 120 degrees.

9. The computing earpiece of claim 1, further comprising a microphone coupled to the processor and configured to detect audio data corresponding to at least one of the user or the surrounding environment, wherein the processor is configured to determine the identity of the object in the surrounding environment further based on the detected audio data.

10. The computing earpiece of claim 1, wherein limiting the object identification search based on the current location of the computing earpiece further includes limiting the object identification search to potential objects that may rationally correspond to the current location of the computing earpiece.
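Claims 1 and 10 recite limiting the object identification search to the subset of stored objects that may rationally correspond to the earpiece's current location. A minimal sketch of that idea, assuming a simple location-to-objects index and feature-set matching (both are illustrative assumptions, not the claimed implementation):

```python
# Illustrative sketch of the location-limited object search in claims 1 and 10.
# The OBJECT_DB index and feature-match logic are assumptions for this example.

OBJECT_DB = {
    "kitchen": [{"name": "kettle", "feature": "kettle_feat"},
                {"name": "fridge", "feature": "fridge_feat"}],
    "street":  [{"name": "crosswalk", "feature": "crosswalk_feat"}],
}

def identify_object(image_features, current_location):
    """Limit the search to objects plausibly present at the current
    location, then match each candidate against the image features."""
    candidates = OBJECT_DB.get(current_location, [])
    for obj in candidates:
        if obj["feature"] in image_features:
            return obj["name"]
    return None

print(identify_object({"kettle_feat", "steam_feat"}, "kitchen"))  # kettle
print(identify_object({"kettle_feat"}, "street"))                 # None
```

Restricting the candidate set this way is what makes the search cheaper and less error-prone than matching against all stored objects: a kettle detection is never even considered when the earpiece is on the street.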
Patents cited by this patent (161)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Hanson Charles M. (Richardson TX) Koester Vaughn J. (Dallas TX) Fallstrom Robert D. (Richardson TX), Head mounted video display and remote camera system.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5 A-9021 Klagenfurt (Karnten) ATX) Pichler Heinrich (Sailerackergasse 38/2 A-1190 Wien (Osterreich) ATX), Information system.
Wellner Pierre D.,GBX ; Flynn Michael J.,GBX ; Carter Kathleen A.,GBX ; Newman William M.,GBX, Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung, Kyung Kwon; Chae, Yeon Sik; Rhee, Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszász u. 40/I. Budapest 1112 HUX) Endrei Károly (Fehérvári út 86. Budapest 1119 HUX) Kézi László (Zugligeti út 69. Budapest 1121 HUX) Endrei Károlyné (Tárogató út 55. Budapest 1021 HUX), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.