Three-dimensional computer model data, moving image data or still image data showing at least one person is stored in an archive database 126, 703, 850, 1303, together with additional information to improve the searching and retrieval of data therefrom. The additional information includes view parameter data 512, 1040, which defines at whom or what each person is looking during each predetermined period of time or image. Text data 504, 1020, which comprises words associated with the person, and viewing histogram data 540, which, for each period of text data, defines the percentage of time that the speaking person spent looking at each other person or object, may also be stored.
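The record layout described in the abstract can be pictured as a small data structure. The following Python sketch is illustrative only; the class and field names (`FramePeriod`, `view_targets`, `ViewingHistogram`) are hypothetical and do not appear in the patent, which identifies these items only by reference numerals (512, 1040; 504, 1020; 540):

```python
from dataclasses import dataclass, field

@dataclass
class FramePeriod:
    """One predetermined period of archived image or 3D model data."""
    image_ref: str                # reference to the stored image/model data
    view_targets: dict            # view parameter data: person -> whom/what they look at
    text: str = ""                # text data: words associated with the speaking person

@dataclass
class ViewingHistogram:
    """Viewing histogram data: for one period of text data, the percentage of
    time the speaking person spent looking at each other person or object."""
    speaker: str
    percentages: dict = field(default_factory=dict)  # target -> percent of period
```

A period would then carry, alongside the image reference, the gaze target of each person and any associated words, while a histogram summarises a speaker's gaze over the period.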
Representative Claim
The invention claimed is:

1. Apparatus for generating a database structure, comprising a memory for storing data and a processor operable to generate in the memory a database structure comprising: an image data file having a plurality of image data storage areas each arranged to store an item of image data; a participants data file arranged to store data identifying participants shown in image data stored in the image data file; and a viewing data file having a plurality of viewing data storage areas each associated with one of the image data storage areas and being arranged to store data relating to the direction in which a participant shown in an item of image data stored in the corresponding image data storage area is looking.

2. Apparatus according to claim 1, wherein the processor is operable to generate in the database structure a plurality of viewing data files each arranged to be associated with a particular different one of participants shown in the image data and each having a plurality of viewing data storage areas each associated with an image data storage area and each being arranged to store data indicating which, if any, of the other participants the participant associated with that viewing data storage area is looking at in an item of image data stored in the corresponding image data storage area.

3. Apparatus according to claim 1, wherein the participants are people.

4. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that the database structure also comprises an audio file having a plurality of audio data storage areas each associated with one of the image data storage areas with each audio data storage area being arranged to store data relating to audio data associated with the corresponding image data storage area.

5. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that the database structure also comprises a plurality of audio files each arranged to be associated with a respective different participant and each having a plurality of audio data storage areas each associated with one of the image data storage areas and each audio data storage area being arranged to store data relating to audio data associated with the corresponding image data storage area.

6. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that the database structure also comprises a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area being arranged to store data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area.

7. Apparatus according to claim 6, wherein the processor is operable to generate the database structure such that the database structure also comprises at least one viewing proportion file arranged to store data relating to the amount of time an associated participant looks at each of a plurality of other participants while that participant is speaking.

8. Apparatus according to claim 6, wherein the processor is operable to generate the database structure such that the speech is arranged to be stored as text.

9. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that the database structure also comprises a plurality of speech files each associated with a particular different participant and each speech file having a plurality of speech data storage areas each associated with one of the image data storage areas and being arranged to store data relating to words spoken by or associated with that participant in relation to an image stored in that image data storage area.

10. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that the database structure also comprises a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area being arranged to store data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area and to generate the viewing data file such that each viewing data storage area is arranged to store data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant when the participant associated with the viewing data file is speaking.

11. Apparatus according to claim 10, wherein the processor is operable to generate a viewing data file for each of a plurality of participants shown in the image data with each viewing data storage area of each viewing data file being arranged to store data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant associated with that viewing data file when that participant is speaking.

12. Apparatus according to claim 1, wherein the processor is operable to generate the database structure such that each image data storage area is arranged to store at least one frame of image data.

13. Apparatus according to claim 1, wherein the image data file is arranged to store image data having sound data associated therewith.

14. Apparatus for generating a database, comprising a memory and a processor operable to generate in the memory a database comprising: an image data file having a plurality of image data storage areas each storing an item of image data; a participants data file storing data identifying participants shown in the image data stored in the image data file; and a viewing data file having a plurality of viewing data storage areas each associated with one of the image data storage areas and storing data identifying the direction in which a participant shown in the item of image data stored in the corresponding image data storage area is looking.

15. A computer-readable storage medium encoded with computer-readable data defining a database structure comprising: an image data file having a plurality of image data storage areas each arranged to store an item of image data; a participants data file arranged to store data identifying participants shown in image data stored in the image data file; and a viewing data file having a plurality of viewing data storage areas each associated with one of the image data storage areas and being arranged to store data identifying the direction in which a participant shown in the image represented by an item of image data stored in the corresponding image data storage area is looking.

16. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising a plurality of viewing data files each arranged to be associated with a particular different one of participants shown in image data and each having a plurality of viewing data storage areas each associated with an image data storage area and each being arranged to store data indicating which, if any, of the other participants the participant associated with that viewing data storage area is looking at in an item of image data stored in the corresponding image data storage area.

17. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising an audio file having a plurality of audio data storage areas each associated with one of the image data storage areas with each audio data storage area being arranged to store data relating to audio data associated with the corresponding image data storage area.

18. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising a plurality of audio files each arranged to be associated with a respective different participant and each having a plurality of audio data storage areas each arranged to be associated with one of the image data storage areas and being arranged to store data relating to sounds issued by or associated with that image data storage area and the corresponding participant.

19. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area being arranged to store data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area.

20. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising a plurality of speech files each associated with a particular different participant and each speech file having a plurality of speech data storage areas each associated with one of the image storage areas and being arranged to store data identifying words spoken by or associated with that participant in relation to an image stored in that image data storage area.

21. A storage medium according to claim 20, wherein the computer-readable data defines a database structure further comprising at least one viewing proportion file arranged to store data relating to the amount of time an associated participant looks at each of a plurality of other participants while speaking.

22. A storage medium according to claim 20, wherein the computer-readable data defines a database structure arranged to store speech as text.

23. A storage medium according to claim 15, wherein the computer-readable data defines a database structure further comprising a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area being arranged to store data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area and wherein each viewing data storage area is arranged to store data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant when the participant associated with the viewing data file is speaking.

24. A storage medium according to claim 23, wherein the computer-readable data defines a database structure comprising a viewing data file for each of a plurality of participants shown in the image data with each viewing data storage area of each viewing data file being arranged to store data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant associated with that viewing data file when that participant is speaking.

25. A storage medium according to claim 15, wherein the computer-readable data defines a database structure wherein the image data file is arranged to store image data having sound data associated therewith.

26. A computer-readable storage medium encoded with computer-readable data defining a database comprising: an image data file having a plurality of image data storage areas each storing an item of image data; a participants data file storing data identifying participants shown in the image data stored in the image data file; and a viewing data file having a plurality of viewing data storage areas each associated with one of the image data storage areas and storing data identifying the direction in which a participant shown in the image represented by the item of image data stored in the corresponding image data storage area is looking.

27. A storage medium according to claim 26, wherein the computer-readable data defines a database having a plurality of viewing data files each associated with a different participant shown in the image data and each having a plurality of viewing data storage areas each associated with an image data storage area and each storing data indicating which, if any, of the other participants the participant associated with that viewing data storage area is looking at in an item of image data stored in the corresponding image data storage area.

28. A storage medium according to claim 26, wherein the computer-readable data defines a database further comprising an audio file having a plurality of audio data storage areas each associated with one of the image data storage areas with each audio data storage area storing data relating to audio data associated with the corresponding image data storage area.

29. A storage medium according to claim 26, wherein the computer-readable data defines a database further comprising a plurality of audio files each associated with a respective different participant and each having a plurality of audio data storage areas each associated with one of the image data storage areas and storing data relating to sounds issued by or associated with the item of image data stored in that image storage area and the corresponding participant.

30. A storage medium according to claim 26, wherein the computer-readable data defines a database further comprising a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area storing data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area.

31. A storage medium according to claim 26, wherein the computer-readable data defines a database further comprising a plurality of speech files each associated with a particular different participant and each speech file having a plurality of speech data storage areas each associated with one of the image storage areas and storing data relating to words spoken by or associated with that participant in relation to an image stored in that image data storage area.

32. A storage medium according to claim 31, wherein the computer-readable data defines a database further comprising at least one viewing proportion file storing data relating to the amount of time an associated participant looks at each of a plurality of other participants while speaking.

33. A storage medium according to claim 31, wherein the computer-readable data defines a database in which speech is stored as text.

34. A storage medium according to claim 26, wherein the computer-readable data defines a database also comprising a speech file having a plurality of speech data storage areas each associated with one of the image data storage areas with each speech data storage area storing data relating to words spoken by or associated with a participant in an item of image data stored in the corresponding image data storage area and wherein each viewing data storage area stores data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant when the participant associated with the viewing data file is speaking.

35. A storage medium according to claim 34, wherein the computer-readable data defines a database having a viewing data file for each of a plurality of participants shown in the image data with each viewing data storage area of each viewing data file storing data indicating which, if any, of other participants in an item of image data stored in the corresponding image data storage area is being looked at by the participant associated with that viewing data file when that participant is speaking.

36. A storage medium according to claim 26, wherein the computer-readable data defines a database wherein the image data file stores image data having sound data associated therewith.

37. A storage medium according to claim 26, wherein the computer-readable data defines a database wherein the image data file stores 3D computer model data.

38. A storage medium according to claim 26, wherein the computer-readable data defines a database wherein the image data file stores video data.

39. A storage medium according to claim 26, wherein the computer-readable data defines a database wherein the image data file stores still image data.

40. A method of generating a database, comprising the steps of causing a processor to generate in a memory a database comprising: an image data file having a plurality of image data storage areas each storing an item of image data; a participants data file storing data identifying participants shown in the image data stored in the image data file; and a viewing data file having a plurality of viewing data storage areas each associated with one of the image data storage areas and storing data identifying the direction in which a participant shown in the image represented by the item of image data stored in the corresponding image data storage area is looking.

41. Apparatus for searching a database storing a plurality of images, data identifying participants shown in the images, and, for each participant in each image, viewing data identifying the subject at which the participant is looking, the apparatus comprising: a receiver operable to receive a first search parameter identifying a first participant and a second search parameter identifying a subject the first participant is looking at; a viewing data identifier operable to search the database to identify in the database viewing data associating the subject defined by the second search parameter with the first participant defined by the first search parameter; and an image data identifier operable to identify image data associated with identified viewing data.

42. Apparatus for searching a database storing a plurality of images, data identifying participants shown in the images, viewing data defining, for each participant in each image, the subject at which the participant is looking, and data defining words spoken by or associated with a participant in an image, the apparatus comprising: a receiver operable to receive a first search parameter identifying a first participant and a second search parameter identifying a subject the first participant is looking at; a viewing data identifier operable to search the database to identify in the database viewing data associating the subject defined by the second search parameter with the first participant defined by the first search parameter; an image data identifier operable to identify image data associated with identified viewing data, wherein the receiver is operable to receive a further search parameter defining words spoken by the first participant to the subject; and a speech data identifier operable to search the database to identify in the database speech data containing speech defined by the further search parameter.

43. Apparatus for searching a database storing a plurality of images, data identifying participants shown in the images, viewing data defining, for each participant in each image, the subject at which the participant is looking, data defining words spoken by or associated with a participant in an image, and viewing proportion data relating to the amount of time a participant looks at each subject while speaking, the apparatus comprising: a receiver operable to receive a first search parameter identifying a first participant, a second search parameter identifying a subject the first participant is looking at, and a third search parameter defining words spoken by the first participant to the subject; a viewing data identifier operable to search the database to identify in the database viewing data associating the subject defined by the second search parameter with the first participant defined by the first search parameter; a speech data identifier operable to search the database to identify in the database speech data containing words defined by the third search parameter; an image data identifier operable to identify image data associated with identified viewing data and identified speech data; and a viewing proportion checker operable to check the viewing proportion data for the first participant for the identified speech and for disregarding any identified speeches where the amount of time the first participant looks at the subject is less than a predetermined proportion of the duration of that speech.

44. A method of searching a database storing a plurality of images, data identifying participants shown in the images, and, for each participant in each image, viewing data identifying the subject at which the participant is looking, the method comprising: receiving a first search parameter identifying a first participant and a second search parameter identifying a subject the first participant is looking at; identifying in the database viewing data associating the subject defined by the second search parameter with the first participant defined by the first search parameter; and identifying image data associated with identified viewing data.
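The search flow of claims 41 through 44 (receive a participant and a gaze subject, match viewing data, optionally filter by spoken words and by viewing proportion) can be sketched as a single Python function. This is a minimal illustration under assumed record shapes; the function name `search_archive` and the dictionary keys are hypothetical, not taken from the patent:

```python
def search_archive(records, first_participant, subject, words=None,
                   histograms=None, min_proportion=None):
    """Return image references where `first_participant` is looking at `subject`.

    records:        list of dicts with keys "image_ref", "view_targets", "text"
    words:          optional further search parameter (claim 42): words the
                    first participant speaks to the subject
    histograms:     optional viewing proportion data (claim 43):
                    image_ref -> {subject: percent of speech duration}
    min_proportion: disregard hits where the participant looks at the subject
                    for less than this proportion of the speech (claim 43)
    """
    hits = []
    for rec in records:
        # Viewing data identifier: does the viewing data associate the
        # subject with the first participant in this image?
        if rec["view_targets"].get(first_participant) != subject:
            continue
        # Speech data identifier: does the speech data contain the words?
        if words is not None and words.lower() not in rec.get("text", "").lower():
            continue
        # Viewing proportion checker: disregard speeches below the threshold.
        if min_proportion is not None and histograms is not None:
            pct = histograms.get(rec["image_ref"], {}).get(subject, 0.0)
            if pct < min_proportion:
                continue
        hits.append(rec["image_ref"])
    return hits
```

A query such as "find images where Alice is looking at Bob while saying 'morning'" would then pass `first_participant="Alice"`, `subject="Bob"`, `words="morning"`, with the viewing-proportion filter applied only when histogram data is available.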
Yang Hsiao-Ying,TWX ; Ni Cheng-Yao,TWX ; Yu Chih-Hsing,TWX ; Liu Chih-Chin,TWX ; Chen Arbee L. P.,TWX, Video database indexing and query method and system.
Clanton,Charles H.; Ventrella,Jeffrey J.; Paiz,Fernando J., Cinematic techniques in avatar-centric communication during a multi-user online simulation.
Newton, Philip Steven; Bolio, Dennis Daniel Robert Jozef; Kurvers, Mark Jozef Maria; Van Der Heijden, Gerardus Wilhelmus Theodorus; Bruls, Wilhelmus Hendrikus Alfonsus; De Haan, Wiebe; Talstra, Johan Cornelis, Combining 3D video and auxiliary data that is provided when not received.
Marks, Richard L.; Mao, Xiaodong; Zalewski, Gary M., Computer image and audio processing of intensity and input devices for interfacing with a computer program.
Larsen, Eric J.; Deshpande, Hrishikesh R; Marks, Richard L., Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion.
Zalewski, Gary M.; Marks, Richard; Mao, Xiaodong, Method and apparatus for tracking three-dimensional movements of an object using a depth sensing camera.
Steiner, Travis; Housden, Eric; Klapka, Robbie; Sternberg, Tom; Whitley, Brandon, Methods and systems for enabling control of artificial intelligence game characters.
Garcia, Luis A; Lopez, Jose Luis; Rasillo, Jorge A; Alanis, Francisco J; Garza, Maria; Cantu, Edward O, Multi-participant audio/video communication system with participant role indicator.
Wang, Niniane; Liaw, Joey Chiu-Wen; Mendes Da Costa, Alexander; Tay, Darin; Guymon, III, Vernon Melvin, Portals between multi-dimensional virtual environments.
Venkataswami, Balaji Venkat; Subramanian, Jegan Kumar Somi Ramasamy; Andhavaram, Rajesh Kumar; Sivasubramanian, Girish, System and method for managing flows in a mobile network environment.
Lu, Torence; Knotts, Monica Shen; Desai, Ashok T.; Wales, Richard T.; Berkhout, Jacobus M.; Makay, Michael C., System and method for managing optics in a video environment.
Fornell, Peter A. J.; Mackie, David J.; Li, Wei; Gajendran, Indrajit Rajeev; Lin, Hai; Chow, Chin-Tong, System and method for providing camera functions in a video environment.
Li, Wei; Mauchly, J. William; Mackie, David J.; Williford, II, Olin D.; Huang, Jinshi; Paszkowski, Pawel; Gajendran, Indrajit Rajeev; Wales, Richard T.; Friel, Joseph T., System and method for providing enhanced audio in a video environment.
Kanalakis, Jr., John M.; Bean, Zachary R.; Mackie, David J.; Collins, Eddie; Dyer, Mark David, System and method for providing enhanced graphics in a video environment.
Mackie, David J.; Tian, Dihong; Weir, Andrew P.; Buttimer, Maurice; Friel, Joseph T.; Mauchly, J. William; Chen, Wen-Hsiung, System and method for providing enhanced video processing in a network environment.
Venkataswami, Balaji Venkat; Subramanian, Kowsalya; Subramanian, Jegan Kumar Somi Ramasamy; Dhayalan, Manikandan; Andhavaram, Rajesh Kumar; Sivasubramanian, Girish, System and method for provisioning flows in a mobile network environment.
Venkataswami, Balaji Venkat; Subramanian, Kowsalya; Somi Ramasamy Subramanian, Jegan Kumar; Dhayalan, Manikandan; Andhavaram, Rajesh Kumar; Sivasubramanian, Girish, System and method for provisioning flows in a mobile network environment.