[US Patent]
Wearable clip for providing social and environmental awareness
Country / Type
United States (US) Patent (Granted)
International Patent Classification (IPC, 7th edition)
G01C-021/36
H04W-004/02
G08B-006/00
H04N-007/18
G06K-009/46
G06K-009/00
G01S-019/13
G01S-019/14
G01S-019/49
G08B-013/24
G08B-025/08
G08B-003/10
G01C-021/16
G01C-021/20
Application Number
US-0489420 (2014-09-17)
Registration Number
US-10024678 (2018-07-17)
Inventors / Address
Moore, Douglas A.
Djugash, Joseph M. A.
Ota, Yasuhiro
Applicant / Address
TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent / Address
Snell & Wilmer LLP
Citation Information
Cited by: 0
Cited patents: 172
Abstract
A clip includes an IMU coupled to the clip and adapted to detect inertial measurement data, and a GPS unit coupled to the clip and adapted to detect location data. The clip further includes a camera adapted to detect image data and a memory adapted to store data. The clip further includes a processor adapted to recognize an object in the surrounding environment by analyzing the data. The processor can determine a desirable action based on the data and a current time or day. The processor can determine a destination based on the determined desirable action. The processor can determine a navigation path based on the determined destination and the data. The processor is further adapted to determine output based on the navigation path. The clip further includes a speaker adapted to provide audio information to the user.
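The abstract describes a sense-recognize-guide pipeline: match live image data against stored object data, choose a desirable action from the user's data and the current time, then derive a destination and route. Below is a minimal Python sketch of that control flow; the Motion type, OBJECT_DB, USER_ROUTINE, and guidance_cycle are hypothetical illustrations under assumed data shapes, not the patent's implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Motion:
    speed: float    # m/s, reported by the IMU
    heading: float  # degrees clockwise from north, reported by the IMU

# Hypothetical stand-ins for the clip's stored data (not disclosed in the patent).
OBJECT_DB = {"bus_stop", "crosswalk", "door"}              # previously determined objects
USER_ROUTINE = {("bus_stop", 8): "board the morning bus"}  # (object, hour) -> desirable action

def guidance_cycle(detections: set, motion: Motion, now: datetime) -> str:
    """One pass of the abstract's pipeline: recognize an object, choose a
    desirable action for the current time, and produce speakable output."""
    # Recognize objects: match camera detections against stored object data.
    recognized = detections & OBJECT_DB
    # Determine a desirable action from the recognized objects and the time of day.
    for obj in sorted(recognized):
        action = USER_ROUTINE.get((obj, now.hour))
        if action:
            # This string would be routed to the speaker as audio information.
            return f"Near {obj}, heading {motion.heading:.0f} deg: {action}"
    return "nothing to suggest"

print(guidance_cycle({"bus_stop", "tree"}, Motion(1.4, 90.0), datetime(2018, 7, 17, 8, 5)))
```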
Representative Claims
1. An intelligent clip to be worn by a user, comprising: a housing having a front portion, a back portion, a channel positioned on the back portion and configured to receive a connection to a fastening device, a top portion, and a bottom portion; and one or more components encased within the housing and including: an inertial measurement unit (IMU) sensor configured to detect inertial measurement data corresponding to a positioning, a speed, a direction of travel or an acceleration of the intelligent clip, at least one camera including a first camera positioned on the front portion, the first camera being configured to detect image data corresponding to a surrounding environment, a memory configured to store object data regarding previously determined objects and previously determined user data associated with the user, a processor connected to the IMU sensor and the at least one camera and configured to: recognize an object in the surrounding environment based on the detected image data and the stored object data, determine a desirable event or action based on the recognized object, the previously determined user data, and a current time or day, determine a destination based on the determined desirable event or action, determine a plurality of navigation paths based on the determined destination, filter the plurality of navigation paths based on the inertial measurement data including the speed and the direction of travel to determine a navigation path of the plurality of navigation paths for the user to travel, and determine output data based on the determined navigation path; and a speaker or a vibration unit configured to provide the output data to the user.

2. The intelligent clip of claim 1, wherein the vibration unit is configured to provide haptic information to the user based on the output data.

3. The intelligent clip of claim 1, wherein the memory is configured to store map data and the processor is configured to determine the navigation path based on the stored map data.

4. The intelligent clip of claim 1, wherein the one or more components include: a wireless communication antenna for establishing an audio or video communication with another portable electronic device or computer used by another person, wherein the processor is further configured to establish the audio or video communication based on the determined desirable event or action.

5. The intelligent clip of claim 1, wherein the one or more components further include a microphone that is configured to detect a speech of the user or another person, wherein the processor is further configured to: parse a conversation of the user or the another person into speech elements, analyze the speech elements based on the previously determined user data, and determine the desirable event or action further based on the analyzed speech elements.

6. The intelligent clip of claim 1, wherein the processor is further configured to determine the desirable event or action based on the detected inertial measurement data.

7. The intelligent clip of claim 1, wherein the at least one camera includes a second camera facing a second direction, wherein the first camera is facing a first direction.

8. The intelligent clip of claim 1, wherein the processor is configured to filter the detected image data and transfer the filtered image data to a remote processor via a wireless communication antenna.

9. The intelligent clip of claim 1, wherein the at least one camera includes a stereo pair of cameras that are further configured to detect depth information.
10. A method for providing continuous social and environmental awareness by an intelligent clip, comprising: detecting, via a camera or an inertial measurement unit (IMU) sensor, inertial measurement data corresponding to a positioning, a speed, a direction of travel, or an acceleration of the intelligent clip, or image data corresponding to a surrounding environment; storing, in a memory, object data regarding previously determined objects and previously determined user data regarding a user; recognizing, by a processor, an object in the surrounding environment based on the detected image data, the stored object data and the inertial measurement data including the speed and the direction of travel of the intelligent clip; determining, by the processor, a desirable event or action based on the recognized object, the previously determined user data, and a current time or day; determining, by the processor, a destination based on the determined desirable event or action; determining, by the processor, a plurality of navigation paths based on the determined destination; filtering, by the processor, the plurality of navigation paths based on the inertial measurement data including the speed and the direction of travel to determine a navigation path of the plurality of navigation paths for the user to travel; determining, by the processor, output data based on the determined navigation path; and providing, via a speaker or a vibration unit, the output data to the user.

11. The method of claim 10, wherein providing the output data includes providing audio or haptic information based on divergence data between the object data and the image data.

12. The method of claim 10, wherein the desirable event or action includes alerting the user of an approaching hazard.

13. The method of claim 10, further including transmitting, via an antenna, the image data and the inertial measurement data to a remote device for processing.

14. The method of claim 10, further comprising: storing, in the memory, map data; and determining the navigation path is based on the stored map data.

15. The method of claim 10, wherein storing the object data includes storing the object data in a remote database accessible by other intelligent devices.
16. An intelligent clip to be worn by a user, comprising: a front; a back; a first side; a second side; an inertial measurement unit (IMU) sensor configured to detect inertial measurement data corresponding to a positioning, a speed, a direction of travel or an acceleration of the intelligent clip; a first camera positioned on the front of the intelligent clip and configured to detect a first image data corresponding to a surrounding environment; a second camera positioned on the first side or the second side and configured to detect a second image data corresponding to the surrounding environment; a memory configured to store object data regarding previously determined objects and previously determined user data associated with the user; a processor connected to the IMU sensor, the first camera and the second camera and configured to: recognize an object in the surrounding environment based on the first image data, the second image data, the stored object data and the inertial measurement data including the speed and the direction of travel of the intelligent clip, determine a desirable event or action based on the recognized object, the previously determined user data, and a current time or day, determine a destination based on the determined desirable event or action, determine a plurality of navigation paths based on the determined destination, filter the plurality of navigation paths based on the inertial measurement data including the speed and the direction of travel to determine a navigation path of the plurality of navigation paths for the user to travel, and determine output data based on the determined navigation path; and a speaker or a vibration unit configured to provide the output data to the user.

17. The intelligent clip of claim 16, wherein the second camera has a lower focal length than the first camera and is configured to read fine print.

18. The intelligent clip of claim 16, further comprising a button configured to be used as an input device for selection of a mode of operation.

19. The intelligent clip of claim 16, further comprising a wireless communication antenna configured to communicate with a remote processor.

20. The intelligent clip of claim 16, wherein the memory is accessible remotely by multiple smart devices.
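All three independent claims share the step of filtering a plurality of navigation paths against the IMU's speed and direction of travel. One plausible reading of that filter is sketched below: prefer the path whose initial bearing best matches the current heading, breaking ties by estimated travel time at the current speed. The Path type and the scoring function are assumptions for illustration, not the claimed method.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    initial_bearing: float  # degrees, direction the path leaves the user's position
    length_m: float

def heading_gap(a: float, b: float) -> float:
    """Smallest absolute angle between two compass bearings, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def filter_paths(paths: list, heading: float, speed: float) -> Path:
    """Reduce the plurality of paths to one: best alignment with the current
    direction of travel first, then shortest ETA at the current speed."""
    def score(p: Path):
        eta_s = p.length_m / max(speed, 0.1)  # guard against division by zero at standstill
        return (heading_gap(p.initial_bearing, heading), eta_s)
    return min(paths, key=score)

routes = [Path("via Main St", 85.0, 420.0), Path("via Park Ave", 265.0, 350.0)]
print(filter_paths(routes, heading=90.0, speed=1.4).name)  # walking east -> "via Main St"
```

Keying the primary comparison on heading means a user already in motion is not routed into an immediate U-turn, which is one natural way to read "filter ... based on ... the direction of travel".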
Cited Patents (selected)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Hanson Charles M. (Richardson TX) Koester Vaughn J. (Dallas TX) Fallstrom Robert D. (Richardson TX), Head mounted video display and remote camera system.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5, A-9021 Klagenfurt (Kärnten), AT) Pichler Heinrich (Sailerackergasse 38/2, A-1190 Wien (Österreich), AT), Information system.
Wellner, Pierre D. (GB); Flynn, Michael J. (GB); Carter, Kathleen A. (GB); Newman, William M. (GB), Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Kretsch Mary J. (Vallejo CA) Gunn Moira A. (San Francisco CA) Fong Alice K. (San Francisco CA), Method and system for measurement of intake of foods, nutrients and other food components in the diet.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung, Kyung Kwon; Chae, Yeon Sik; Rhee, Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszász u. 40/I, Budapest 1112, HU) Endrei Károly (Fehryri t 86, Budapest 1119, HU) Kezi László (Zugligeti út 69, Budapest 1121, HU) Endrei Károlyné (Trogatt 55, Budapest 1021, HU), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.