Wearable eyeglasses for providing social and environmental awareness
IPC Classification
Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): G06K-009/00; G02B-027/01; G06F-003/00; G06F-003/01; G06F-003/16
Application number: US-0489315 (2014-09-17)
Registration number: US-9922236 (2018-03-20)
Inventors: Moore, Douglas A.; Djugash, Joseph M. A.; Ota, Yasuhiro
Applicant: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent: Snell & Wilmer L.L.P.
Citation information: cited by 0 patents; cites 172 patents
Abstract
Eyeglasses include a left lens, a right lens, an IMU sensor, and a GPS unit. A camera and a memory are coupled to the eyeglasses. A processor is connected to the IMU, the GPS unit, and the camera, and is adapted to recognize objects by analyzing image data based on the stored object data and inertial measurement data or location data. The processor is also adapted to determine a desirable event based on the object, previously determined user data, and a time. The processor is also adapted to determine a destination based on the determined desirable event and determine a navigation path for navigating the eyeglasses to the destination based on the determined destination, image data, and inertial measurement data or location data. The processor is also adapted to determine output data based on the determined navigation path. A speaker is also provided.
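The processing pipeline the abstract describes — recognize an object (with the search limited by current location), infer a desirable event from stored user data and the time, pick a destination, and emit guidance — can be sketched in plain Python. Every name and data structure below is an illustrative assumption, not the patent's implementation:

```python
# Hypothetical stored object data: object name -> area where it was previously seen
STORED_OBJECTS = {"bus_stop": "5th_ave", "cafe": "main_st"}

# Hypothetical user data: (recognized object, hour of day) -> (desirable event, destination)
USER_EVENTS = {
    ("bus_stop", 8): ("catch morning bus", "office"),
    ("cafe", 12): ("lunch break", "cafe"),
}

def recognize_object(detected_labels, current_area):
    """Limit the object identification search to objects known near the
    current location, then match against the camera's detections."""
    candidates = [obj for obj, area in STORED_OBJECTS.items() if area == current_area]
    for label in detected_labels:
        if label in candidates:
            return label
    return None

def determine_event(obj, hour):
    """Pick a desirable event from the recognized object, the stored
    user data, and the current time."""
    return USER_EVENTS.get((obj, hour))

def plan(detected_labels, current_area, hour):
    """End-to-end decision: object -> event -> destination -> output text."""
    obj = recognize_object(detected_labels, current_area)
    if obj is None:
        return None
    event = determine_event(obj, hour)
    if event is None:
        return None
    action, destination = event
    # A real device would now compute a navigation path from GPS/IMU and
    # map data; here we just report the decision as speaker output text.
    return f"{action}: head to {destination}"
```

Limiting the candidate set by location before matching is what the claims call the "limited object identification search"; it shrinks the recognition problem before any image analysis runs.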
Representative Claims
1. A wearable computing device having an eyeglasses form and designed to be worn by a user, comprising:
a body having a frame, a left lens, and a right lens;
an inertial measurement unit (IMU) attached to the body and configured to detect inertial measurement data corresponding to a positioning, velocity, or acceleration of the body;
a global positioning system (GPS) sensor attached to the body and configured to detect global positioning data corresponding to a global position of the body;
at least one camera attached to the body and configured to detect image data corresponding to a surrounding environment of the body and a moving object or person in the surrounding environment;
a memory attached to the body and configured to store object data regarding previously determined objects, previously determined user data associated with the user, and a preferred distance between the body and the moving object or person;
a processor attached to the body and electrically coupled to the IMU, the GPS sensor, the at least one camera, and the memory, and configured to:
determine a current location of the body based on at least one of the inertial measurement data, the global positioning data, or the image data,
recognize an object in the surrounding environment by limiting an object identification search based on the current location of the body and by analyzing the image data based on the stored object data and the limited object identification search,
determine an event or action to be performed based on the recognized object, the previously determined user data, and a current time or day,
determine a destination based on the determined event or action to be performed,
determine a navigation path from the current location of the body to the destination based on the determined destination, the image data, and at least one of the inertial measurement data or the global positioning data,
determine a current distance between the body and the moving object or person,
determine that a current speed of the body should increase when the current distance between the body and the moving object or person is greater than the preferred distance, and
determine that the current speed of the body should decrease when the current distance between the body and the moving object or person is less than the preferred distance; and
a speaker attached to the body, electrically coupled to the processor, and configured to:
provide audio information to the user based on at least one of the recognized object, the determined event or action to be performed, or the navigation path,
provide audio information to the user to increase a current walking speed when the processor determines that the current speed of the body should increase, and
provide audio information to the user to decrease the current walking speed when the processor determines that the current speed of the body should decrease.

2. The wearable computing device of claim 1, further comprising a vibratory motor coupled to the body and configured to provide haptic information to the user based on the determined output data.

3. The wearable computing device of claim 1, wherein the memory is configured to store map data and the processor is further configured to determine the navigation path based on the image data, the map data, and the at least one of the global positioning data or the inertial measurement data.

4. The wearable computing device of claim 1, further comprising a wireless communication antenna for establishing an audio or video communication with a remote portable electronic device or a remote computer, wherein the processor is further configured to establish the audio or video communication based on the determined event or action to be performed.

5. The wearable computing device of claim 1, further comprising a first connector adapted to electrically couple the left lens to the frame and a second connector adapted to electrically couple the right lens to the frame.

6. The wearable computing device of claim 1, further comprising a microphone coupled to the body, electrically coupled to the processor, and configured to detect a speech of the user or another person, wherein the processor is further configured to: parse a conversation of the user or the another person into speech elements, analyze the speech elements based on the previously determined user data, and determine the event or action to be performed further based on the analyzed speech elements.

7. The wearable computing device of claim 1, wherein the at least one camera includes a stereo camera pair configured to detect depth information regarding the surrounding environment.

8. The wearable computing device of claim 1, wherein the at least one camera includes a wide angle camera configured to detect image data within a 120 degree field of view.

9. A method for providing continuous social and environmental awareness by a wearable computing device having an eyeglass form, comprising:
detecting, via an IMU, inertial measurement data corresponding to a positioning, velocity, or acceleration of the wearable computing device;
detecting, via a GPS sensor, global positioning data corresponding to a global position of the wearable computing device;
detecting, via a camera, image data corresponding to a surrounding environment of the wearable computing device and a moving object or person in the surrounding environment;
storing, in a memory, object data corresponding to previously determined objects, previously determined user data regarding a user, and a preferred distance between the wearable computing device and the moving object or person;
determining, by a processor, a current location of the wearable computing device based on at least one of the inertial measurement data, the global positioning data, or the image data;
recognizing, by the processor, an object in the surrounding environment by limiting an object identification search based on the current location of the wearable computing device and by analyzing the image data based on the stored object data and the limited object identification search;
determining, by the processor:
an event or action to be performed based on the recognized object, the previously determined user data, and a current time or day,
a destination based on the determined desirable event or action,
a navigation path from the current location of the wearable computing device to the destination based on the determined destination, the image data, and at least one of the inertial measurement data or the global positioning data,
a current distance between the wearable computing device and the moving object or person,
that a current speed of the wearable computing device should increase when the current distance between the wearable computing device and the moving object or person is greater than the preferred distance, and
that the current speed of the wearable computing device should decrease when the current distance between the wearable computing device and the moving object or person is less than the preferred distance;
providing, via a speaker or a vibration unit, audio or haptic information to the user based on at least one of the recognized object, the determined event or action to be performed, or the navigation path; and
providing, via the speaker or the vibration unit, additional audio or haptic information to the user to increase a current walking speed when the processor determines that the current speed of the wearable computing device should increase, and to decrease the current walking speed when the processor determines that the current speed of the wearable computing device should decrease.

10. The method of claim 9, further comprising determining, by the processor, divergence data between the object data and the image data, wherein providing audio or haptic information to the user further includes providing audio or haptic information based on the divergence data.

11. The method of claim 9, wherein providing audio or haptic information to the user includes providing stereo haptic information.

12. The method of claim 9, further comprising transmitting, via an antenna, the image data and at least one of the inertial measurement data or the global positioning data to a remote device, wherein the processor is located on the remote device.

13. The method of claim 9, further comprising storing, in the memory, map data, wherein determining the navigation path includes determining the navigation path based on the image data, the map data, and at least one of the global positioning data or the inertial measurement data.

14. The method of claim 9, wherein storing the object data includes storing the object data in a remote database that can be accessed by other electronic devices.

15. A wearable computing device having an eyeglass form to be worn by a user, comprising:
a body having a frame, a right lens, and a left lens;
an inertial measurement unit (IMU) attached to the body and configured to detect inertial measurement data corresponding to a positioning, velocity, or acceleration of the body;
a global positioning system (GPS) sensor attached to the body and configured to detect global positioning data corresponding to a global position of the body;
at least one camera positioned on at least one of the right lens or the left lens, attached to the body, and configured to detect image data corresponding to a surrounding environment of the body and a moving object or person in the surrounding environment;
a memory attached to the body and configured to store object data regarding previously determined objects, previously determined user data associated with the user, and a preferred distance between the body and the moving object or person;
an antenna attached to the body and configured to transmit the image data, the inertial measurement data, the global positioning data, and the object data to a remote processor and to receive processed data from the remote processor, the remote processor configured to:
determine a current location of the body based on at least one of the inertial measurement data, the global positioning data, or the image data,
recognize an object in the surrounding environment by limiting an object identification search based on the current location of the body and by analyzing the image data based on the stored object data and the limited object identification search,
determine an event or action to be performed based on the recognized object, the previously determined user data, and a current time or day,
determine a destination based on the determined desirable event or action,
determine a navigation path from the current location of the body to the destination based on the determined destination, the image data, and at least one of the inertial measurement data or the global positioning data,
determine a current distance between the body and the moving object or person,
determine that a current speed of the body should increase when the current distance between the body and the moving object or person is greater than the preferred distance, and
determine that the current speed of the body should decrease when the current distance between the body and the moving object or person is less than the preferred distance; and
a speaker configured to:
provide audio information to the user based on at least one of the recognized object, the determined desirable event or action, or the navigation path,
provide audio information to the user to increase a current walking speed when the remote processor determines that the current speed of the body should increase, and
provide audio information to the user to decrease the current walking speed when the remote processor determines that the current speed of the body should decrease.

16. The wearable computing device of claim 15, further comprising a vibratory motor coupled to the body and configured to provide haptic information to the user based on at least one of the recognized object, the determined desirable event or action, or the navigation path.

17. The wearable computing device of claim 15, wherein the memory is configured to store map data and the remote processor is further configured to determine the navigation path based on the image data, the map data, and at least one of the global positioning data or the inertial measurement data.

18. The wearable computing device of claim 15, further comprising a first connector adapted to electrically attach the left lens to the frame and a second connector adapted to electrically attach the right lens to the frame.

19. The wearable computing device of claim 15, further comprising a microphone configured to detect a speech of the user or another person, wherein the remote processor is further configured to: parse a conversation of the user or the another person into speech elements, analyze the speech elements based on the previously determined user data, and determine the event or action to be performed further based on the analyzed speech elements.

20. The wearable computing device of claim 15, wherein the at least one camera includes a stereo camera pair configured to detect depth information regarding the surrounding environment.
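The pacing behavior recited in claims 1, 9, and 15 — cue the user to speed up when the tracked object or person pulls farther ahead than the preferred distance, and to slow down when closer — reduces to a single comparison against the stored preferred distance. A minimal sketch (function and cue strings are assumptions, not the patent's wording):

```python
from typing import Optional

def pacing_cue(current_distance: float, preferred_distance: float) -> Optional[str]:
    """Return the audio cue for matching pace with a moving object or
    person, per the preferred-distance rule in the claims."""
    if current_distance > preferred_distance:
        return "increase walking speed"   # target is pulling ahead
    if current_distance < preferred_distance:
        return "decrease walking speed"   # too close to the target
    return None  # exactly at the preferred distance: no cue
```

On the device this result would be routed to the speaker (or, per claim 9, a vibration unit) each time a new distance estimate arrives from the camera.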
Patents cited by this patent (172)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Hanson Charles M. (Richardson TX) Koester Vaughn J. (Dallas TX) Fallstrom Robert D. (Richardson TX), Head mounted video display and remote camera system.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5 A-9021 Klagenfurt (Karnten) ATX) Pichler Heinrich (Sailerackergasse 38/2 A-1190 Wien (Osterreich) ATX), Information system.
Wellner Pierre D.,GBX ; Flynn Michael J.,GBX ; Carter Kathleen A.,GBX ; Newman William M.,GBX, Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Kretsch Mary J. (Vallejo CA) Gunn Moira A. (San Francisco CA) Fong Alice K. (San Francisco CA), Method and system for measurement of intake of foods, nutrients and other food components in the diet.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung, Kyung Kwon; Chae, Yeon Sik; Rhee, Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszsz u.4o/I. Budapest; 1112 HUX) Endrei Károly (Fehryri t 86. Budapest 1119 HUX) Kezi László (Zugligeti t 69. Budapest 1121 HUX) Endrei Krolyn (Trogatt 55. Budapest 1021 HUX), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.