Smart necklace with stereo vision and onboard processing
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.):
G06K-009/00
G01C-021/20
H04N-013/296
H04N-013/239
Application Number: US-0480590 (filed 2014-09-08)
Registration Number: US-10248856 (granted 2019-04-02)
Inventors / Address:
Moore, Douglas A.
Djugash, Joseph M. A.
Ota, Yasuhiro
Applicant / Address:
TOYOTA MOTOR ENGINEERING & MANUFACTURING NORTH AMERICA, INC.
Agent / Address:
Snell & Wilmer LLP
Citation Information
Times cited: 0
Patents cited: 172
Abstract
A wearable neck device and a method of operating the wearable neck device are provided for outputting optical character recognition information to a user. The wearable neck device has at least one camera, and a memory storing optical character or image recognition processing data. A processor detects a document in the surrounding environment and adjusts the field of view of the at least one camera such that the detected document is within the adjusted field of view. The processor analyzes the image data within the adjusted field of view using the optical character or image recognition processing data. The processor determines output data based on the analyzed image data. A speaker of the wearable neck device provides audio information to the user based on the output data.
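The abstract's processing loop (detect a document, adjust the camera's field of view so the document is covered, run optical character recognition, and speak the result) can be sketched as follows. All class and function names are illustrative stand-ins; the patent discloses no source code, and the OCR stage is mocked by a stored text payload.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    bounds: tuple  # (x, y, w, h) of the document within the frame
    text: str      # stand-in payload a real OCR stage would produce

def detect_document(frame):
    """Return the first document detection in the frame, or None."""
    return frame[0] if frame else None

def process_frame(frame, adjust_fov, speak):
    """Pipeline from the abstract: detect a document, adjust the
    field of view to cover it, recognize text, and emit audio."""
    doc = detect_document(frame)
    if doc is None:
        return None
    adjust_fov(doc.bounds)   # re-aim the camera at the document
    recognized = doc.text    # stand-in for the OCR stage
    speak(recognized)        # speaker output based on the output data
    return recognized
```

The `adjust_fov` and `speak` callables model the camera actuator and the speaker; in the claimed device these are hardware components driven by the processor.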
Representative Claims
1. A wearable neck device for providing optical character or image recognition information to a user, comprising:
a band having a left portion, a right portion, and a central portion connecting the left portion and the right portion;
an inertial measurement unit (IMU) that is configured to detect IMU data including a body posture of the user;
at least one camera connected to the band, having a field of view, and configured to detect image data corresponding to a surrounding environment of the user;
a memory storing optical character or image recognition processing data corresponding to an algorithm or a set of instructions for identifying characters or images of documents;
a processor connected to the IMU, the memory and the at least one camera, and configured to:
determine, using the image data, a location of the user,
determine, using the IMU data, the body posture of the user,
detect, using the image data, a plurality of documents in the surrounding environment of the user based on the location of the user,
determine, using the image data and IMU data, that the plurality of documents are of interest to the user based on the determined location and the determined body posture of the user relative to a location of the plurality of documents in the surrounding environment of the user, and in response:
recognize a document that is among the plurality of documents that are of interest and in the surrounding environment of the user,
provide a description of the document that identifies the document to the user,
receive a user selection of the document that is among the plurality of documents that are of interest and in the surrounding environment of the user in response to providing the description of the document to the user,
select the document that is among the plurality of documents in the surrounding environment based on the user selection,
adjust the field of view of the at least one camera such that the document is within the adjusted field of view,
recognize, using the optical character or image recognition processing data, at least one of a character or an image of the document, and
determine output data based on the at least one of the character or the image of the document; and
a speaker configured to provide audio information to the user based on the output data.

2. The wearable neck device of claim 1, wherein the at least one camera includes a first camera connected to the left portion of the band and a second camera connected to the right portion of the band, the first camera and the second camera forming a pair of stereo cameras configured to operate in conjunction with each other to detect the image data.

3. The wearable neck device of claim 1, wherein the at least one camera includes at least one of an omni-directional camera or a wide-angle camera having a field of view of at least 120 degrees.

4. The wearable neck device of claim 3, wherein the processor is configured to adjust the field of view of the at least one camera to be focused on the document by performing optical character or image recognition processing on a portion of the image data that includes the document.

5. The wearable neck device of claim 1, wherein the memory and the processor are positioned within the band.

6. The wearable neck device of claim 1, wherein the processor is further configured to determine whether the at least one camera is obstructed and to control the speaker to output audio alert information to the user for alerting the user that the at least one camera is obstructed in response to determining that the at least one camera is obstructed.

7. The wearable neck device of claim 1, wherein the processor is further configured to re-adjust the field of view of the at least one camera when a portion of the document exits the field of view of the at least one camera.

8. The wearable neck device of claim 1, further comprising a first vibratory motor connected to the left portion of the band and a second vibratory motor coupled to the right portion of the band, the first vibratory motor and the second vibratory motor being configured to provide stereo vibration data based on the output data.

9. The wearable neck device of claim 1, further comprising a mechanical rotating device connected with the at least one camera, wherein the processor is configured to adjust the field of view of the at least one camera to be focused on the document by controlling the mechanical rotating device to rotate the at least one camera in a direction towards the document.

10. The wearable neck device of claim 1, further comprising a wireless communication antenna for establishing a wireless connection with another portable electronic device or computer having a second processor, wherein the processor of the wearable neck device and the second processor of the other portable electronic device or computer are configured to operate in conjunction with each other to analyze the image data based on the optical character or image recognition processing data.

11. The wearable neck device of claim 1, further comprising a microphone coupled to the processor and configured to detect a speech of the user, wherein the processor is further configured to detect the speech in response to input from the user.

12.
A method for providing optical character or image recognition information to a user of a wearable neck device having at least one camera with a field of view, the method comprising:
storing, in a memory, optical character or image recognition processing data corresponding to an algorithm or a set of instructions for identifying characters or images of documents;
detecting, using the at least one camera, image data corresponding to a surrounding environment of the user;
detecting, using an inertial measurement unit (IMU), IMU data that includes a body posture of the user;
determining, using a processor connected to the memory and the at least one camera, a location of the user based on the image data;
detecting, using the processor connected to the memory and the IMU, a body posture of the user based on the IMU data;
detecting, using the processor, a plurality of documents in the surrounding environment of the user based on the location of the user;
determining, using the processor, that the plurality of documents are of interest to the user based on the body posture and the location of the user relative to a location of the plurality of documents in the surrounding environment of the user, and in response:
recognizing, using the processor, a document among the plurality of documents that are of interest and in the surrounding environment of the user;
providing, using the processor, a description of the document to the user;
selecting, using the processor, the document that is among the plurality of documents that are of interest and in the surrounding environment of the user based on a user selection;
adjusting, using the processor, the field of view of the at least one camera such that the document is within the adjusted field of view;
recognizing, using the optical character or image recognition processing data, at least one of a character or an image of the document;
determining, using the processor, output data based on the at least one of the character or the image of the document; and
outputting, using a speaker, audio information to the user based on the output data.

13. The method of claim 12, wherein the at least one camera includes a first camera connected to a left portion of the wearable neck device, and a second camera connected to a right portion of the wearable neck device, the first camera and the second camera forming a pair of stereo cameras configured to operate in conjunction with each other to detect the image data.

14. The method of claim 13, wherein the wearable neck device includes at least one mechanical rotating device connected with the pair of stereo cameras, wherein adjusting, using the processor, the field of view of the at least one camera includes controlling the at least one mechanical rotating device to rotate the pair of stereo cameras in a direction towards the document.

15. The method of claim 13, further comprising re-adjusting, using the processor, the field of view of the pair of stereo cameras when a portion of the document exits the field of view.

16. The method of claim 13, further comprising: providing, using a vibratory motor, vibration feedback to the user based on the output data.

17.
A neck worn device for assisting a user having visual impairment, comprising:
a housing having a left end, a right end and a center portion positioned between the left end and the right end;
an inertial measurement unit (IMU) that is configured to detect IMU data including a body posture of the user;
a left side camera mounted proximal to the left end and configured to detect image data;
a right side camera mounted proximal to the right end and configured to detect image data, the left side camera and the right side camera forming a pair of stereo cameras;
a memory coupled to the housing and configured to store an optical character recognition software program;
a processor positioned within the housing and coupled to the IMU, the left side camera, the right side camera and the memory and configured to:
determine, using the image data detected by the right side and the left side cameras, a location of the user,
determine, using the IMU data, the body posture of the user,
detect a plurality of documents in a surrounding environment of the neck worn device based on the location of the user,
determine, using the image data detected by the right side and the left side cameras and the IMU data, that the plurality of documents are of interest to the user based on the body posture of the user and the location of the user relative to a location of the plurality of documents in the surrounding environment, and in response:
recognize a document that is among the plurality of documents that are of interest and in the surrounding environment,
provide a description of the document that identifies the document to the user,
receive a user selection of the document that is among the plurality of documents that are of interest and in the surrounding environment,
select the document that is among the plurality of documents in the environment based on the user selection,
adjust a field of view of at least one of the left side camera or the right side camera such that the document is within the field of view,
identify characters on the document using the optical character recognition software program, and
generate a feedback signal based on the identified characters; and
a speaker configured to provide audio information based on the feedback signal.

18. The neck worn device of claim 17, further comprising a wireless communication antenna configured to establish data communications with another portable electronic device or computer.

19. The neck worn device of claim 17, further comprising a mechanical rotating device connected with the at least one of the left side camera or the right side camera, wherein the processor is configured to control the mechanical rotating device to rotate the at least one of the left side camera or the right side camera to adjust the field of view of the at least one of the left side camera or the right side camera to be focused towards the document.

20. The neck worn device of claim 17, wherein the processor is configured to re-adjust the field of view of the at least one of the left side camera or the right side camera when a portion of the document exits the field of view.
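Claims 7, 9, 14, 15, 19 and 20 describe re-aiming the camera, optionally via a mechanical rotating device, so the document stays within the field of view. A minimal sketch of such a centering correction follows; the normalized pan command and the jitter deadband are assumptions for illustration, not details from the patent.

```python
def pan_correction(doc_center_x, frame_width, deadband=0.1):
    """Return a signed pan command in [-1.0, 1.0] that rotates the
    camera toward the document's horizontal center. A small deadband
    suppresses jitter when the document is already near center."""
    # Offset of the document center from the frame center, normalized
    # to half the frame width: -1.0 (far left) .. 1.0 (far right).
    offset = (doc_center_x - frame_width / 2) / (frame_width / 2)
    if abs(offset) < deadband:
        return 0.0  # document sufficiently centered; no rotation needed
    return max(-1.0, min(1.0, offset))
```

A controller loop would feed this command to the rotating device each frame, re-adjusting whenever a portion of the document drifts toward the edge of the field of view.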
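Each independent claim (1, 12, 17) describes announcing the detected documents to the user and then acting on a user selection. A sketch of that exchange, assuming spoken descriptions and an index-based choice (both mechanisms are illustrative; the patent leaves the selection interface open, e.g. speech via the microphone of claim 11):

```python
def select_document(documents, describe, get_choice):
    """Announce each detected document via `describe`, then return the
    document picked by the user's `get_choice` response (an index),
    or None if the choice is out of range."""
    for index, doc in enumerate(documents):
        describe(f"Document {index + 1}: {doc}")  # e.g. spoken via the speaker
    choice = get_choice()  # e.g. parsed from the user's speech input
    if 0 <= choice < len(documents):
        return documents[choice]
    return None
```

The selected document is what the processor then centers in the field of view and passes to the OCR stage.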
Patents cited by this patent (172)
Chao, Hui; Das, Saumitra Mohan; Gupta, Rajarshi; Khorashadi, Behrooz; Sridhara, Vinay; Pakzad, Payam, Adaptive updating of indoor navigation assistance data for use by a mobile device.
Lynt Ingrid H. (7502 Toll Ct. Alexandria VA 22306) Lynt Christopher H. (7502 Toll Ct. Alexandria VA 22306), Apparatus for converting visual images into tactile representations for use by a person who is visually impaired.
Albertson, Jacob C.; Arnold, Kenneth C.; Goldman, Steven D.; Paolini, Michael A.; Sessa, Anthony J., Assisting a vision-impaired user with navigation based on a 3D captured image stream.
Janardhanan, Jayawardan; Dutta, Goutam; Tripuraneni, Varun, Attitude estimation for pedestrian navigation using low cost mems accelerometer in mobile applications, and processing methods, apparatus and systems.
Sawan, Mohamad; Harvey, Jean François; Roy, Martin; Coulombe, Jonathan; Savaria, Yvon; Donfack, Colince, Body electronic implant and artificial vision system thereof.
Kramer James P. (Stanford CA) Lindener Peter (E. Palo Alto CA) George William R. (Palo Alto CA), Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove.
Kurzweil, Raymond C.; Albrecht, Paul; Gashel, James; Gibson, Lucy; Lvovsky, Lev, Gesture processing with low resolution images with high resolution processing for optical character recognition for a reading machine.
Hanson Charles M. (Richardson TX) Koester Vaughn J. (Dallas TX) Fallstrom Robert D. (Richardson TX), Head mounted video display and remote camera system.
Strub, Henry B.; Burgess, David A.; Johnson, Kimberly H.; Cohen, Jonathan R.; Reed, David P., Hybrid recording unit including portable video recorder and auxillary device.
Hirsch Hermann (Hirschstrasse 5 A-9021 Klagenfurt (Karnten) ATX) Pichler Heinrich (Sailerackergasse 38/2 A-1190 Wien (Osterreich) ATX), Information system.
Wellner Pierre D.,GBX ; Flynn Michael J.,GBX ; Carter Kathleen A.,GBX ; Newman William M.,GBX, Interactive desktop display system for automatically adjusting pan and zoom functions in response to user adjustment of a feedback image.
Kretsch Mary J. (Vallejo CA) Gunn Moira A. (San Francisco CA) Fong Alice K. (San Francisco CA), Method and system for measurement of intake of foods, nutrients and other food components in the diet.
Stanford Thomas H. (Escondido CA) Sahne Farhad Noroozi (San Diego CA) Riches Thomas P. (Temecula CA) O'Neill Robert (San Diego CA), Neck engageable transducer support assembly and method of using same.
Jung,Kyung Kwon; Chae,Yeon Sik; Rhee,Jin Koo, Object identification system combined with millimeter-wave passive image system and global positioning system (GPS) for the blind.
Holakovszky László (Beregszász u. 4o/I. Budapest 1112 HUX) Endrei Károly (Fehryri t 86. Budapest 1119 HUX) Kezi László (Zugligeti út 69. Budapest 1121 HUX) Endrei Károlyné (Trogatt 55. Budapest 1021 HUX), Stereoscopic video image display appliance wearable on head like spectacles.
Dieberger, Andreas, System and method for non-visually presenting multi-part information pages using a combination of sonifications and tactile feedback.
Naick, Indran; Spinac, Clifford J.; Sze, Calvin L., Using a display associated with an imaging device to provide instructions to the subjects being recorded.
Lipton Lenny (San Rafael CA) Halnon Jeffrey J. (Richmond CA) Mitchell Larry H. (Cupertino CA) Hursey Robert (Carmel Valley CA), Wireless active eyewear for stereoscopic applications.