IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th edition):
Application Number: US-0139265 (2002-05-02)
Inventors / Address:
- King, Charles J.
- Muta, Hidemasa
- Schwerdtfeger, Richard Scott
- Snow Weaver, Andrea
Applicant / Address:
- International Business Machines Corporation
Agent / Address:
Citation Information: Times cited: 5; Cited patents: 9
Abstract
A described computer network includes a first computer system and a second computer system. The first computer system transmits screen image information and corresponding speech information to the second computer system. The screen image information includes information corresponding to a screen image intended for display within the first computer system. The speech information conveys a verbal description of the screen image. When the screen image includes one or more objects (e.g., menus, dialog boxes, icons, and the like) having corresponding semantic information, the speech information includes the corresponding semantic information. The second computer system responds to the speech information by producing an output (e.g., human speech via an audio output device, a tactile output via a Braille output device, and the like). The semantic information conveyed by the output allows a visually-impaired user of the second computer system to know intended purposes of the objects. The second computer system may also receive user input, generate an input signal corresponding to the user input, and transmit the input signal to the first computer system. The first computer system may respond to the input signal by updating the screen image. The semantic information conveyed by the output enables the visually-impaired user to properly interact with the first computer system.
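The host-to-client data flow described in the abstract can be sketched as a single message carrying both screen image information and speech information with per-object semantics. The message fields and function names below are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class ScreenUpdate:
    """Hypothetical message sent by the first (host) computer system."""
    bitmap: bytes                                   # screen image information (e.g., a bit map)
    description: str                                # verbal description of the screen image
    semantics: dict = field(default_factory=dict)   # object name -> intended purpose

def render_for_user(update: ScreenUpdate) -> str:
    """Second (client) computer system: turn the speech information into an
    accessible output (a stand-in for speech synthesis or a Braille device)."""
    lines = [update.description]
    for obj, purpose in update.semantics.items():
        lines.append(f"{obj}: {purpose}")
    return "\n".join(lines)

update = ScreenUpdate(
    bitmap=b"...",
    description="A dialog box is displayed.",
    semantics={"OK button": "confirms the selection"},
)
print(render_for_user(update))
```

Because the semantic information travels alongside the raw bitmap, the client can convey the intended purpose of each object even though it never parses the image itself.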
Representative Claims
What is claimed is:

1. A computer network, comprising: a first computer system designed to interact with a visually impaired user and configured to transmit screen image information and corresponding speech information to another computer system, wherein the screen image information includes information corresponding to a screen image intended for display within the first computer system, and wherein the speech information conveys a verbal description of the screen image, and wherein in the event the screen image includes an object having corresponding semantic information, the speech information includes the semantic information; and a second computer system in communication with the first computer system, wherein the second computer system is configured to receive user input from a user of the second computer system, to generate an input signal corresponding to the user input, and to transmit the input signal to the first computer system, and wherein in response to the user input the first computer system transmits updated screen image information and corresponding speech information; wherein the visually impaired user interacts with the first computer system through the use of the second computer system.

2. The computer network as recited in claim 1, wherein the second computer system is configured to receive the speech information, and to respond to the received speech information by producing an output, and wherein in the event the screen image includes an object having corresponding semantic information, the output conveys to a user the semantic information corresponding to the object.

3. The computer network as recited in claim 2, wherein in the event the screen image includes an object having corresponding semantic information, the output produced by the second computer system conveys to a visually-impaired user information concerning an intended purpose of the object.

4. The computer network as recited in claim 2, wherein the second computer system is configured to respond to the received speech information by producing human speech conveying the semantic information.

5. The computer network as recited in claim 2, wherein the second computer system is configured to respond to the received speech information by producing a tactile output conveying the semantic information.

6. The computer network as recited in claim 1, wherein in the event a user of the second computer system is visually impaired, and in the event the screen image includes an object having corresponding semantic information, the speech information including the semantic information transmitted from the first computer system to the second computer system enables the visually-impaired user to properly interact with the first computer system.

7. The computer network as recited in claim 1, wherein the second computer system comprises a display screen, and wherein the second computer system is configured to receive the screen image information, and to respond to the received screen image information by displaying the screen image on the display screen.

8. The computer network as recited in claim 1, wherein in the event the screen image includes an object having corresponding semantic information, the semantic information conveys an intended purpose of the object.

9. The computer network as recited in claim 1, wherein objects having corresponding semantic information include menus, dialog boxes, and icons.

10. The computer network as recited in claim 1, wherein the screen image information comprises a bit map of the screen image.

11. The computer network as recited in claim 1, wherein in the event the screen image includes an object having corresponding semantic information and comprising text, the speech information includes the semantic information and the text.

12. A computer network, comprising: a first computer system designed to interact with a visually impaired user and configured to: transmit screen image information and corresponding speech information, wherein the screen image information includes information corresponding to a screen image intended for display within the first computer system, and wherein the speech information conveys a verbal description of the screen image, and wherein in the event the screen image includes an object having corresponding semantic information, the speech information includes the semantic information; receive an input signal, and respond to the input signal by updating the screen image; a second computer system configured to: receive user input from a user of the second computer system; generate the input signal dependent upon the user input; transmit the input signal to the first computer system; and receive the speech information, and respond to the received speech information by producing an output, wherein in the event the screen image includes an object having corresponding semantic information, the output conveys the semantic information; wherein the visually impaired user interacts with the first computer system through the use of the second computer system.

13. The computer network as recited in claim 12, wherein in the event the user of the second computer system is visually impaired and the screen image includes an object having corresponding semantic information, the semantic information conveyed by the output enables the visually-impaired user to properly interact with the first computer system.

14. The computer network as recited in claim 12, wherein the second computer system is configured to respond to the received speech information by producing human speech conveying the semantic information.

15. The computer network as recited in claim 12, wherein the second computer system is configured to respond to the received speech information by producing a tactile output conveying the semantic information.

16. The computer network as recited in claim 12, wherein the second computer system comprises a display screen, and wherein the second computer system is configured to receive the screen image information, and to respond to the received screen image information by displaying the screen image on the display screen.

17. A first computer system, comprising: a distributed console access application configured to receive screen image information from a second computer system designed to interact with a visually impaired user, wherein the screen image information includes information corresponding to a screen image intended for display within the second computer system; a speech information receiver configured to receive speech information, corresponding to the screen image information, from the second computer system, wherein the speech information conveys a verbal description of the screen image; and an output device coupled to receive audio output signals and configured to produce an output, wherein the audio output signals are indicative of the speech information, and wherein the output conveys a description of the screen image; wherein in the event that the screen image includes an object having corresponding semantic information, the speech information includes the semantic information, and the output conveys the semantic information; wherein the first computer system is configured to receive user input from a user of the first computer system, to generate an input signal corresponding to the user input, and to transmit the input signal to the second computer system, and wherein in response to the user input the second computer system transmits updated screen image information and corresponding speech information; and wherein the visually impaired user interacts with the second computer system through the use of the first computer system.

18. The computer system as recited in claim 17, wherein the distributed console access application is coupled to receive the input signal, and configured to transmit the input signal to the second computer system.

19. The computer system as recited in claim 17, wherein the output device comprises an audio output device producing human speech that conveys a verbal description of the screen image.

20. The computer system as recited in claim 17, wherein the output device comprises a Braille output device producing a tactile output that conveys the description of the screen image.

21. A method for conveying speech information from a first computer system to a second computer system, wherein the first computer system is designed to interact with a visually impaired user, comprising: receiving speech information corresponding to screen image information, wherein the screen image information includes information corresponding to a screen image intended for display within the first computer system, and wherein the speech information conveys a verbal description of the screen image; transmitting the speech information to the second computer system; receiving user input from a user of the second computer system; generating an input signal corresponding to the user input by the second computer system; transmitting the input signal to the first computer system; wherein in the event the screen image includes an object having corresponding semantic information, the speech information includes the semantic information; wherein in response to the user input, transmitting updated screen image information and corresponding speech information by the first computer system; and wherein the visually impaired user interacts with the first computer system through the use of the second computer system.

22. A method for producing an output within a first computer system, comprising: receiving speech information corresponding to screen image information from a second computer system designed to interact with a visually impaired user, wherein the screen image information includes information corresponding to a screen image intended for display within the second computer system, and wherein the speech information conveys a verbal description of the screen image; and providing the speech information to an output device of the first computer system; receiving user input from a user of the first computer system; generating an input signal corresponding to the user input by the first computer system; transmitting the input signal to the second computer system; wherein in the event the screen image includes an object having corresponding semantic information, the speech information includes the semantic information; wherein in response to the user input, transmitting updated screen image information and corresponding speech information by the second computer system; and wherein the visually impaired user interacts with the second computer system through the use of the first computer system.
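The method claims (21 and 22) describe a request/response loop: the client forwards each user input as an input signal, and the host answers with updated screen image and speech information. A minimal sketch under that reading, with hypothetical function names and message fields:

```python
def host_respond(screen: dict, key: str) -> dict:
    """First/host system: respond to an input signal by transmitting
    updated screen image and speech information (hypothetical logic)."""
    if key == "ENTER" and "OK button" in screen.get("semantics", {}):
        return {"description": "Selection confirmed.", "semantics": {}}
    return screen

def client_loop(screen: dict, keys: list) -> list:
    """Second/client system: forward each user input as an input signal
    and collect the spoken descriptions that come back."""
    spoken = [screen["description"]]
    for key in keys:
        screen = host_respond(screen, key)  # transmit input signal, receive update
        spoken.append(screen["description"])
    return spoken

initial = {"description": "A dialog box is displayed.",
           "semantics": {"OK button": "confirms the selection"}}
print(client_loop(initial, ["ENTER"]))
```

The point of the loop is that the visually-impaired user only ever hears (or feels, via Braille) the speech information, yet can still drive the host system because each input signal round-trips through it.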