| Country / Status | United States (US) Patent, Granted |
|---|---|
| IPC (7th edition) | |
| Application No. | US-0238330 (1999-01-27) |
| Inventor / Address | |
| Applicant / Address | |
| Agent / Address | |
| Citation info | Cited by 370 patents; cites 32 patents |
A computer based graphical user interface for facilitating the selection and display of transmitted audio, video and data includes a main menu state with a first multi-segment display having an active video/audio segment and a tuning segment. The active video segment displays a currently tuned program and the tuning segment includes an elongated graphic bar. The elongated graphic bar is dynamically sub-divided into a plurality of contiguous regions so that each of the regions uniquely corresponds to a program parsed from a multi-program data stream. The tuning segment also includes a graphic slider that overlays the graphic bar and that is movable along the length of the graphic bar so that the currently tuned program corresponds to the portion of the graphic bar underlying the current position of the graphic slider.
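The tuning mechanism in the abstract — a bar dynamically subdivided into contiguous regions, one per parsed program, with the slider position selecting the underlying region — can be sketched as a simple position-to-index mapping. This is a minimal illustration with invented names and pixel coordinates, not the patent's implementation:

```python
# Minimal sketch of the abstract's tuning bar, assuming pixel coordinates and
# a known program count; all names here are illustrative, not from the patent.

def tuned_program(slider_pos: float, bar_length: float, num_programs: int) -> int:
    """Return the index of the program whose bar region lies under the slider."""
    if num_programs <= 0 or bar_length <= 0:
        raise ValueError("need a non-empty bar and at least one program")
    pos = min(max(slider_pos, 0.0), bar_length)      # clamp slider to the bar
    region = int(pos / bar_length * num_programs)    # equal contiguous regions
    return min(region, num_programs - 1)             # right edge -> last region
```

Because the bar is "dynamically sub-divided", `num_programs` would be recomputed whenever programs are parsed from the multi-program stream, and the region boundaries shift accordingly.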
…communicating behavioral information associated with a sequence of behavioral movements; and animating the visual representation responsive to the sequence of behavioral movements associated with the gesture.

2. The method of claim 1 wherein communicating the data to the recipient concurrently with a behavioral movement comprises animating the visual representation with the behavioral movement.

3. The method of claim 2 wherein the set of behavioral characteristics includes a set of personality types and mood intensity values, wherein each personality type comprises a set of behavioral movements and each mood intensity value comprises a set of behavioral movements, and animating the visual representation comprises: responsive to a user selection of a personality type and mood intensity value, determining a set of behavioral movements comprising the intersection of the set of behavioral movements associated with the selected personality type and the set of behavioral movements associated with the selected mood intensity value; and communicating the data to the recipient concurrently with a behavioral movement further comprises: selecting a behavioral movement from the determined intersection set of behavioral movements; and animating the visual representation responsive to the selected behavioral movement.

4. The method of claim 3 further comprises randomly selecting a behavioral movement from the intersection set of behavioral movements.

5. The method of claim 1 wherein communicating the data to the recipient concurrently with a behavioral movement comprises animating movement of facial components of the visual representation.

6. The method of claim 1 wherein communicating the data to the recipient concurrently with a behavioral movement comprises animating movement of body components of the visual representation.

7. The method of claim 1 wherein communicating the data to the recipient concurrently with a behavioral movement comprises generating sound clips.

8. The method of claim 1 wherein the set of behavioral characteristics includes a set of personality types, and wherein a personality type comprises a predefined set of behavioral movements, and receiving a selection of a behavioral characteristic comprises receiving a selection of a personality type, and communicating the data to the recipient concurrently with a behavioral movement further comprises: selecting a behavioral movement from the set of behavioral movements associated with the selected personality type; and animating the visual representation responsive to the selected behavioral movement.

9. The method of claim 8 wherein the behavioral movement is randomly selected from the set of behavioral movements associated with the selected personality type.

10. The method of claim 1 wherein the set of behavioral characteristics includes a set of mood intensity values, and receiving a selection of a behavioral characteristic further comprises receiving a selection of a mood intensity value, and selecting a behavioral movement from the set of behavioral movements further comprises selecting a behavioral movement from the set of behavioral movements responsive to the selected mood intensity value.

11. The method of claim 10 wherein a mood intensity value specifies a weight for each behavioral movement associated with a personality type, wherein the weight determines a probability of selecting the behavioral movement, and selecting a behavioral movement from the set of behavioral movements further comprises selecting a behavioral movement from the set of behavioral movements responsive to the weight associated with the behavioral movement.

12. The method of claim 10 in a system in which a second user selects at least one mood intensity value for a visual representation representing the second user, and selecting a behavioral movement from the set of behavioral movements further comprises selecting a behavioral movement from the set of behavioral movements responsive to the selected mood intensity value of the second user.

13. The method of claim 1 further comprising: receiving an utterance override command comprising a subset of behavioral movements associated with a behavioral characteristic selected by the user; and selecting a behavioral movement within the set of behavioral movements associated with the received utterance override command; and wherein communicating the data to the recipient concurrently with a behavioral movement comprises: animating the visual representation responsive to the utterance override command to communicate the selected behavioral characteristic.

14. The method of claim 13 wherein the utterance override command specifies a mood intensity setting.

15. The method of claim 1 wherein communicating the data to the recipient concurrently with a behavioral movement further comprises: determining content of the data to be communicated; and modifying the behavioral movement of the visual representation responsive to the content of the data to be communicated.

16. The method of claim 15 in which predefined categories of words are associated with behavioral movements, and wherein determining the content of the data to be communicated comprises: determining whether words in the data to be communicated belong to a predefined category; and wherein modifying further comprises: responsive to determining that a word in the data to be communicated belongs to a predefined category, animating the visual representation responsive to the behavioral movement associated with the category.

17. The method of claim 15 wherein predefined phrases are associated with at least one behavioral movement, and determining the content further comprises: determining whether at least one predefined phrase is part of the data to be communicated; and responsive to determining that a predefined phrase is part of the data to be communicated, animating the visual representation responsive to the at least one behavioral movement associated with the predefined phrase.

18. A method of communicating data from a user to a remote recipient through a remote connection comprising: providing a set of behavioral characteristics of a visual representation to the user, the behavioral characteristics representing contexts within which data is to be interpreted; receiving a selection of a behavioral characteristic from one of the set of behavioral characteristics from the user; receiving data to be communicated from the user to the recipient; communicating the data to the recipient concurrently with a behavioral movement of the visual representation associated with the selected behavioral characteristic, wherein the behavioral movement provides context to the recipient for interpreting the communicated data; receiving an utterance override command comprising a subset of behavioral movements associated with a behavioral characteristic selected by the user; and selecting a behavioral movement within the set of behavioral movements associated with the received utterance override command; wherein communicating the data to the recipient concurrently with a behavioral movement comprises: animating the visual representation responsive to the utterance override command to communicate the selected behavioral characteristic; and wherein the utterance override command specifies a personality type.

19. A method of communicating over a network comprising: receiving a data communication from a first user, wherein the data communication contains behavioral movement information; translating the received behavioral movement information into a choreography sequence of behavioral movements of a figure of the first user by: determining whether the data communication contains gesture commands; and responsive to determining that the data communication contains at least one gesture command, constructing a choreography sequence from at least one behavioral movement associated …
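The selection logic described in claims 3, 4, and 11 — take the intersection of the movement sets for the chosen personality type and mood intensity, then pick randomly with mood-derived weights — can be sketched as follows. All tables, names, and values are invented for illustration; the patent does not specify any of them:

```python
import random

# Invented example data: movements per personality type and per mood intensity.
PERSONALITY_MOVES = {"upbeat": {"wave", "grin", "nod"}, "calm": {"nod", "blink"}}
MOOD_MOVES = {1: {"blink", "nod"}, 5: {"wave", "grin", "nod"}}
# Per claim 11, a mood intensity assigns each movement a selection weight.
MOOD_WEIGHTS = {1: {"nod": 1, "blink": 3}, 5: {"wave": 3, "grin": 2, "nod": 1}}

def pick_movement(personality: str, mood: int, rng: random.Random) -> str:
    # Claim 3: candidates are the intersection of both movement sets.
    candidates = sorted(PERSONALITY_MOVES[personality] & MOOD_MOVES[mood])
    if not candidates:
        raise LookupError("no movement satisfies both selections")
    # Claims 4 and 11: random selection, weighted by the mood intensity.
    weights = [MOOD_WEIGHTS[mood][m] for m in candidates]
    return rng.choices(candidates, weights=weights, k=1)[0]
```

Sorting the intersection before sampling keeps the draw reproducible for a fixed random seed, which is a design choice of this sketch rather than anything the claims require.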
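Claim 19's translation step — detecting gesture commands in a received message and expanding them into a choreography sequence of behavioral movements — might look like the sketch below. The `:name:` token syntax and the movement lists are assumptions for illustration, not details from the patent:

```python
# Hypothetical gesture-command table: each command expands to an ordered
# list of behavioral movements forming part of the choreography sequence.
GESTURE_MOVES = {
    ":wave:": ["raise_arm", "wave_hand", "lower_arm"],
    ":bow:": ["bend_torso", "straighten"],
}

def choreograph(message: str) -> list[str]:
    """Build a choreography sequence from gesture commands in the message."""
    sequence: list[str] = []
    for token in message.split():
        if token in GESTURE_MOVES:  # "determining whether ... contains gesture commands"
            sequence.extend(GESTURE_MOVES[token])
    return sequence
```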
Copyright KISTI. All Rights Reserved.