Smart necklace with stereo vision and onboard processing
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition): G01C-021/36; G06F-003/01; G01C-021/20; A61H-003/06; H04N-013/02
Application number: US-0562478 (2014-12-05)
Registration number: US-9915545 (2018-03-13)
Inventors: Chen, Tiffany L.; Djugash, Joseph M.A.; Yamamoto, Kenichi
Applicant: TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent: Snell & Wilmer LLP
Citation information: cited by 0 patents; cites 172 patents
Abstract
A method for providing directions to a blind user of a smart device is described. The method includes detecting, by at least two sensors and in response to a selection of a find mode of the smart device, image data corresponding to a surrounding environment of the smart device and positioning data corresponding to a positioning of the smart device. The method also includes receiving, by an input device, the desired object or the desired location. The method also includes determining, by a processor, the initial location of the smart device based on the image data, the positioning data and map data stored in a memory of the smart device. The method also includes providing, by the output device, the directions to the desired object based on the initial location of the smart device and the map data.
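The find-mode flow described in the abstract can be sketched roughly as follows. This is an illustrative sketch only: the class and function names, the flat x/y coordinate convention, and the direction heuristics are assumptions for the example, not anything specified by the patent.

```python
# Hypothetical sketch of the "find mode" flow: localize the device, then
# either produce navigation instructions or report a relative location.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

def find_mode(target: str, desired_output: str, current: Pose,
              object_locations: dict[str, Pose]) -> str:
    """Return turn-by-turn instructions or a relative-location report."""
    goal = object_locations[target]
    dx, dy = goal.x - current.x, goal.y - current.y
    if desired_output == "navigation":
        # A real device would plan over stored map data and keep updating
        # the route from inertial measurement data as the user moves.
        steps = []
        if abs(dx) > 0.1:
            steps.append(f"walk {abs(dx):.1f} m {'east' if dx > 0 else 'west'}")
        if abs(dy) > 0.1:
            steps.append(f"walk {abs(dy):.1f} m {'north' if dy > 0 else 'south'}")
        return ", then ".join(steps) or "you have arrived"
    # Otherwise report the relative location by comparing the two poses.
    return f"{target} is {(dx**2 + dy**2) ** 0.5:.1f} m away"
```

In practice the current pose would come from visual localization against the map data rather than being passed in directly.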
Representative Claims

1. A method for providing directions to a blind user of an electronic device, the directions being from a current location of the electronic device to a location of a desired object or a desired location, the method comprising: detecting, by a camera and an inertial measurement unit and in response to a selection of a find mode of the electronic device, image data corresponding to a surrounding environment of the electronic device and inertial measurement data corresponding to a movement of the electronic device, respectively; receiving, by an input device, user input including an identifier of the desired object or the desired location and additional input corresponding to a desired output; determining, by a processor, whether the desired output includes navigation instructions from the current location of the electronic device to the desired object or the desired location, or a relative location of the desired object or the desired location from the current location of the electronic device based on the additional input corresponding to the desired output; storing, in a memory, map data corresponding to an environment of the electronic device; determining, by a processor, at least one of a current location of the electronic device or a current location of the desired object or the desired location based on the image data; determining, by the processor, the navigation instructions from the current location of the electronic device to the desired object or the desired location based on the map data and the determined at least one of the current location of the electronic device or the current location of the desired object or the desired location; determining, by the processor, movement of the electronic device based on the inertial measurement data; updating, by the processor, the navigation instructions based on the movement of the electronic device; determining, by the processor, the relative location of the desired object or the desired location by comparing the current location of the electronic device to the location of the desired object or the desired location; providing, by an output device, the navigation instructions when the desired output includes the navigation instructions; and providing, by the output device, the relative location of the desired object or the desired location when the desired output includes the relative location.

2. The method of claim 1 further comprising generating, by the processor, the map data based on the image data and the inertial measurement data.

3. The method of claim 1 further comprising: determining, by the processor, that the desired object or the desired location fails to match data stored in the memory; requesting, by the processor, additional data identifying the desired object or the desired location from the blind user; and determining, by the processor, the navigation instructions to the desired object based on the current location of the electronic device, the map data, and the additional data identifying the desired object or the desired location.

4. The method of claim 1 wherein the output device includes a first vibration unit positioned on a first side of the electronic device and a second vibration unit positioned on a second side of the electronic device, and the directions include vibrational patterns output from the first vibration unit and the second vibration unit.

5. The method of claim 1 wherein the output device includes a speaker and the directions are provided by the speaker as speech.

6. The method of claim 1 further comprising determining, by the processor, that an obstacle exists in a current path of the electronic device based on the image data and wherein updating the navigation instructions further includes determining alternative routes that cause the blind user to avoid the obstacle.

7.
A method for describing at least one object or person within a predetermined distance and angle of an electronic device, the method comprising: storing, in a memory, stored image data associated with a plurality of stored objects and people and a plurality of identifiers such that each identifier is associated with a stored object or person; receiving, by an input device, input data including a selection of identification of a single object or person within the predetermined distance and angle of the electronic device or a selection of identification of multiple objects or people within the predetermined distance and angle of the electronic device; receiving, by the input device, a granularity setting of the electronic device; detecting, by a camera and in response to a selection of an explore mode or a scan mode of the electronic device, detected image data corresponding to at least one object or person at a single point in time; determining, by a processor and when the input data includes the selection of the identification of the single object or person, the single object or person to be identified and a single identifier from the plurality of identifiers corresponding to the single object or person within the predetermined distance and angle of the camera based on the detected image data and the stored image data; outputting, via a speaker, the single identifier; determining, by the processor and when the input data includes the selection of identification of multiple objects or people, multiple identifiers from the plurality of identifiers including a first identifier corresponding to a first object or person of the multiple objects or people, a second identifier corresponding to a second object or person of the multiple objects or people, and a third identifier corresponding to a third object or person of the multiple objects or people, the first object, the second object, and the third object being detected at the single point in time within the predetermined distance and angle of the camera based on the detected image data and the stored image data; and outputting, via the speaker, the first identifier and the second identifier only when the granularity setting is a first setting, and the first identifier, the second identifier, and the third identifier when the granularity setting is a second setting indicating a greater granularity than the first setting.

8. The method of claim 7 wherein the predetermined distance and angle of the camera is less than a field of view of the camera.

9. The method of claim 7 wherein the input device includes the camera and receiving the input data that includes the selection of the identification of the single object or person includes receiving a user gesture indicating that the single object or person is selected for identification.

10. The method of claim 7 wherein the first identifier, the second identifier, and the third identifier are output in an order that is based on a directional order of the first object or person, the second object or person, and the third object or person.

11. The method of claim 7 wherein the camera includes a pair of stereo cameras, the detected image data includes depth information, and outputting the single identifier further includes outputting the depth information.

12. The method of claim 7 further comprising: receiving, at the processor, a request for directions to the single object or person; determining, by the processor, directions from a current location of the electronic device to a location of the single object or person; and outputting, via the speaker, the directions.

13. The method of claim 7 wherein the predetermined distance and the predetermined angle vary based on whether the explore mode or the scan mode were selected.

14.
A method for storing a first location of an electronic device and providing directions to a blind user from a second location of the electronic device to the first location of the electronic device, the method comprising: determining, by a processor, whether a label mode is selected or whether a temporary location store mode is selected; detecting, by a positioning sensor, a first positioning data corresponding to the first location of the electronic device and a second positioning data corresponding to the second location of the electronic device; storing, in a memory and in response to a first depression of a capture button of the electronic device when the temporary location store mode is selected, the first positioning data on a map; detecting, by the positioning sensor, additional positioning data as the electronic device moves from the first location towards the second location; determining, by the processor and in response to a second depression of the capture button of the electronic device when the temporary location store mode is selected, directions to the first location from the second location based on the first positioning data, the second positioning data, and the additional positioning data; providing, by an output device, the directions in response to the second depression of the capture button of the electronic device when the temporary location store mode is selected; detecting, by the positioning sensor or a camera, additional data corresponding to a location, object, or person; determining, by the processor and in response to a third depression of the capture button when the label mode is selected, a label corresponding to the location, object, or person; storing, in the memory, the additional data and the label corresponding to the location, object, or person; and associating, in the memory, the additional data with the label.

15.
The method of claim 14 further comprising detecting, by the camera and in response to the second depression of the capture button, image data corresponding to an environment of the user, and wherein determining the directions includes determining the directions based on the image data.

16. The method of claim 15 wherein determining the directions includes: determining a route between the first location and the second location; determining an obstacle along the route based on the image data; and determining an alternate route based on the obstacle.

17. A method for describing at least one object or person within a predetermined distance and angle of an electronic device to a blind user of the electronic device, the method comprising: storing, in a memory, stored image data associated with a plurality of stored objects and people and a plurality of identifiers such that each identifier is associated with a stored object or person; detecting, by a camera and in response to a selection of an explore mode or a scan mode of the electronic device, detected image data corresponding to the at least one object or person; determining, by a processor, at least a first identifier from the plurality of identifiers corresponding to the at least one object or person within the predetermined distance and angle of the camera based on the detected image data and the stored image data; outputting, via a speaker, the at least first identifier; determining, by the processor and in response to the selection of the explore mode, if the explore mode has been canceled; detecting, by the camera and in response to the explore mode not being canceled, a second image data of at least another object or person after a predetermined period of time; determining, by the processor, at least a second identifier from the plurality of identifiers corresponding to the at least another object or person within the predetermined distance and angle of the camera based on the second image data and the stored image data; and outputting, via the speaker, the at least second identifier.
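Claim 7's granularity setting, which caps how many identifiers are spoken for one camera frame, can be sketched as below. The function name, the setting names, and the per-setting counts (two for the coarser setting, three for the finer one, matching the claim's first/second/third identifiers) are illustrative assumptions.

```python
# Illustrative sketch of claim 7's granularity setting: identifiers of
# objects detected at a single point in time are announced up to a count
# that grows with the granularity setting.
def identifiers_to_speak(detected: list[str], granularity: str) -> list[str]:
    """Return the identifiers to announce for one camera frame."""
    limit = {"first": 2, "second": 3}[granularity]  # coarser vs. finer setting
    # Claim 10 orders the output directionally; here we assume `detected`
    # is already sorted left-to-right across the camera's field of view.
    return detected[:limit]
```

A real device would first match each detection against the stored image data to resolve it to a stored identifier; the list here stands in for that result.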
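Claim 14's capture button behaves like a small state machine: in the temporary location store mode, a first press stores the current position and a second press yields directions back to it, while the label mode associates a label with detected data. A minimal sketch, in which the class, mode strings, tuple positions, and output phrasing are all assumptions:

```python
# Hypothetical state machine for claim 14's capture button.
from typing import Optional

class CaptureButton:
    def __init__(self):
        self.stored_location = None  # set by the first press

    def press(self, mode: str, position: tuple,
              label: Optional[str] = None) -> str:
        if mode == "temporary_location_store":
            if self.stored_location is None:
                # First press: remember where we are (e.g. a parked car).
                self.stored_location = position
                return "location stored"
            # Second press: directions from here back to the stored spot.
            dx = self.stored_location[0] - position[0]
            dy = self.stored_location[1] - position[1]
            return f"return: {dx:+.1f} m east, {dy:+.1f} m north"
        if mode == "label":
            # Label mode: associate a label with the detected data.
            return f"labeled current location as {label!r}"
        raise ValueError(f"unknown mode: {mode}")
```

The claim's additional positioning data gathered while walking would refine the return route; the straight-line offset here is only the simplest stand-in.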
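Claim 17's explore mode is a loop: identify what the camera sees, wait a predetermined period, and repeat until the mode is canceled. A sketch under stated assumptions — the frame source is an iterator of already-recognized identifier lists, and the sleep between frames is omitted:

```python
# Sketch of claim 17's explore mode: keep announcing identifiers at a
# fixed period until the mode is canceled.
from typing import Callable, Iterator

def explore_mode(frames: Iterator[list[str]],
                 canceled: Callable[[], bool]) -> list[str]:
    """Collect (in place of speaking) identifiers per frame until canceled.

    A real device would sleep for the predetermined period between frames
    and match each frame's image data against the stored image data.
    """
    spoken = []
    for identifiers in frames:
        if canceled():
            break
        spoken.extend(identifiers)
    return spoken
```

Scan mode, by contrast, would run this body once for a single frame rather than looping.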
Patents cited by this patent (172)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Hanson Charles M. (Richardson TX) Koester Vaughn J. (Dallas TX) Fallstrom Robert D. (Richardson TX), Head mounted video display and remote camera system.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5 A-9021 Klagenfurt (Kärnten) ATX) Pichler Heinrich (Sailerackergasse 38/2 A-1190 Wien (Österreich) ATX), Information system.
Wellner, Pierre D. (GBX); Flynn, Michael J. (GBX); Carter, Kathleen A. (GBX); Newman, William M. (GBX), Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Kretsch Mary J. (Vallejo CA) Gunn Moira A. (San Francisco CA) Fong Alice K. (San Francisco CA), Method and system for measurement of intake of foods, nutrients and other food components in the diet.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung, Kyung Kwon; Chae, Yeon Sik; Rhee, Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszász u.4o/I. Budapest; 1112 HUX) Endrei Károly (Fehryri t 86. Budapest 1119 HUX) Kézi László (Zugligeti út 69. Budapest 1121 HUX) Endrei Károlyné (Trogatt 55. Budapest 1021 HUX), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.