IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC 7th edition) |
Application Number | US-0381733 (2001-09-26)
Registration Number | US-7467186 (2008-12-16)
Priority Information | FR-00 12284 (2000-09-27)
International Application Number | PCT/FR01/002977 (2001-09-26)
§371/§102 date | 2003-08-21 (2003-08-21)
International Publication Number | WO02/027567 (2002-04-04)
Inventors / Address |
- Attar, Oussama
- Blaise, Marc
- DeLafosse, Marc
- Delafosse, Michel
Applicant / Address |
Agent / Address |
Citation Information | Cited by: 3 / Cited patents: 9
Abstract
The invention concerns an interactive method for communicating data to users (1) of a communication network (3). Each user (1) is provided with computer equipment (2) connected to the electronic communication network (3). The method uses at least one virtual object (6) and comprises the following steps: distributing, via the communication network (3), data enabling the computer equipment (2) to display fixed and/or animated images (10) and/or to compute and display screen pages (10); distributing, via the communication network (3), data enabling the computer equipment (2) to compute the virtual object (6) and display it superimposed on the images and/or screen pages (10); and real-time remote control of the virtual object (6) and simultaneous animation thereof by an operator (7), independently of the images and/or screen pages (10).
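The abstract describes three cooperating steps: broadcast image or screen-page data to each user's computer, broadcast data letting that computer render a virtual object superimposed on those pages, and let a remote operator animate the object in real time, independently of the underlying pages. The sketch below is a minimal, illustrative data-flow model of those steps only; all class and field names (ScreenPage, AvatarState, OperatorCommand, ClientDisplay) are hypothetical and do not appear in the patent.

```python
# Illustrative sketch only; not the patented implementation.
from dataclasses import dataclass, field

@dataclass
class ScreenPage:
    """Still/animated images or a computed screen page broadcast to the user's computer."""
    page_id: str
    pixels: bytes = b""          # placeholder for the actual image data

@dataclass
class AvatarState:
    """State of the virtual object superimposed on the screen page."""
    x: float = 0.0
    y: float = 0.0
    lips_open: bool = False      # graphic expression: lip animation
    eyes_open: bool = True       # graphic expression: eye animation
    speech_text: str = ""        # vocal expression supplied by the operator

@dataclass
class OperatorCommand:
    """A real-time command issued by the remote human operator."""
    move_to: tuple[float, float] | None = None
    say: str | None = None

@dataclass
class ClientDisplay:
    """User's computer: renders the page and the avatar independently of each other."""
    page: ScreenPage | None = None
    avatar: AvatarState = field(default_factory=AvatarState)

    def show_page(self, page: ScreenPage) -> None:
        # Step 1: display the broadcast images / screen pages.
        self.page = page

    def overlay_avatar(self, cmd: OperatorCommand) -> None:
        # Steps 2-3: the virtual object is animated by the remote operator,
        # independently of the underlying page, and superimposed on it.
        if cmd.move_to is not None:
            self.avatar.x, self.avatar.y = cmd.move_to
        if cmd.say is not None:
            self.avatar.speech_text = cmd.say
            self.avatar.lips_open = True

if __name__ == "__main__":
    client = ClientDisplay()
    client.show_page(ScreenPage(page_id="product-catalog"))
    client.overlay_avatar(OperatorCommand(move_to=(120.0, 80.0), say="Can I help you?"))
    print(client.avatar)
```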
Representative Claims
The invention claimed is:

1. Interactive method of communicating information to users over a communications network, each user having a computer connected to said communications network, comprising the steps of: broadcasting, via said communications network, data enabling said computers to display or process images to be viewed by users, said images comprising at least one of the following: still images, animated images or screen pages; broadcasting, via said communications network, data enabling said computers to process and display at least one virtual object superimposed on said images; animating said at least one virtual object on a computer associated with a user by a human operator; remotely controlling and animating said at least one virtual object in real-time, simultaneously and independently from said images by said human operator to provide said at least one virtual object with graphic and vocal expressions including animation of lips and eyes, intonations, inflections, reactions of said at least one virtual object to actions, needs, and expectations of said user, thereby providing personalized interactions with said user; and selectively assisting said human operator by artificial intelligence which can replace said human operator entirely or partially, said artificial intelligence controlling actions of said at least one virtual object in a predetermined manner according to pre-established scenarios and analyzing the actions of said user by means of a voice recognition module.

2. The method of claim 1, further comprising the steps of: capturing an audiovisual sequence of said user; and transmitting said audiovisual sequence to said operator via said communications network, thereby enabling said human operator to observe said user and analyze the user's behavior.

3. The method of claim 1, further comprising the steps of: capturing questions asked by said user; and transmitting to said operator said questions asked by said user via said communications network, thereby enabling said human operator to hear said user and analyze the user's behavior.

4. The method of claim 1, further comprising the steps of: capturing voice data issued by said human operator and intended for said user; and transmitting to said user said voice data issued by said human operator via said communications network, thereby enabling said user to listen to information supplied by said human operator.

5. The method of claim 1, further comprising the step of transmitting a copy of said images to said human operator, via said communications network, thereby enabling said human operator to process said images viewed by said user.

6. The method of claim 1, further comprising the step of connecting said human operator to at least one database containing multimedia information, wherein said multimedia information comprises at least one of the following: text, images, sound, videos, or 3D animation sequences, thereby enabling said human operator to answer the user's questions and transmit information to said user.

7. The method of claim 1, further comprising the step of displaying said at least one virtual object in the form of a stylized graphic personage in three dimensions.

8. The method of claim 7, wherein the step of remotely controlling and animating comprises the step of animating said graphic personage to symbolize a product, a service, a brand, a business or an association for which said graphic personage performs the service of informing, promoting or selling.

9. The method of claim 7, wherein the step of remotely controlling and animating comprises the step of animating said graphic personage with graphic expressions including animation of the lips and eyes and with vocal expressions including intonations, said graphic and vocal expressions expressing reactions of said graphic personage to actions, needs and expectations of said user.

10. The method of claim 7, wherein the step of remotely controlling and animating shifts said graphic personage on said images, varies size, shape and color of said graphic personage, and provides said graphic personage with gestures including movements of head, arms and legs based on zones of said images and the contextual relationship between said user and said human operator on whose behalf said graphic personage acts.

11. The method of claim 10, wherein the step of remotely controlling and animating comprises the step of adding accessories according to the development of said contextual relationship; and wherein said accessories are used by said human operator to shift and vary said graphic personage, and provide said graphic personage with expressions and gestures.

12. The method of claim 10, wherein the step of remotely controlling and animating comprises the step of animating said graphic personage to activate action areas of said images.

13. The method of claim 1, wherein the step of remotely controlling and animating controls and animates said at least one virtual object by said human operator situated in a call center or said human operator's home.

14. An interactive system for communicating data to users over a communications network, comprising: a plurality of computers connected to said communications network, each computer having a control device and being associated with a user; a first server connected to said communications network, said first server distributing data enabling said computers to display or process images, said images comprising at least one of the following: still images, animated images or screen pages; a second server connected to said communications network, said second server distributing data enabling said computers to process at least one virtual object; and a control computer connected to said communications network and associated with said second server; and wherein said at least one virtual object is activated on a computer associated with a user by said human operator of said control computer; wherein each of said computers comprises visualization means for displaying said images and said at least one virtual object superimposed on said images; wherein said control computer is operable by said human operator to remotely control in real-time said at least one virtual object on said computer associated with said user; wherein said images and said at least one virtual object are animated simultaneously and independently by said first server and said control computer, respectively, to provide said at least one virtual object with graphic and vocal expressions including animation of lips and eyes, intonations, inflections, reactions of said at least one virtual object to actions, needs, and expectations of said user, thereby providing personalized interactions with said user; and wherein said human operator is selectively assisted by artificial intelligence which can replace said human operator entirely or partially, said artificial intelligence controlling the actions of said at least one virtual object in a predetermined manner according to pre-established scenarios and analyzing the actions of said user by means of a voice recognition module.

15. The system of claim 14, further comprising a camera connected to said computer associated with said user for capturing an audiovisual sequence from said user; and wherein said computer associated with said user is operable to transmit said audiovisual sequence to said control computer over said communications network; and wherein said control computer is operable to display said audiovisual sequence to enable said human operator to observe said user and analyze the user's behavior.

16. The system of claim 14, further comprising a microphone connected to said computer associated with said user for capturing questions asked by said user; and wherein said computer associated with said user is operable to transmit said questions to said control computer over said communications network; and wherein said control computer comprises a speaker to enable said human operator to hear said questions.

17. The system of claim 14, further comprising a microphone connected to said control computer for capturing voice data issued by said human operator and intended for said user; and wherein said control computer is operable to transmit said voice data to said computer associated with said user over said communications network; and wherein said computer associated with said user comprises a speaker to enable said user to hear said voice data issued by said human operator.

18. The system of claim 14, wherein said computer associated with said user is operable to transmit a copy of said images viewed by said user to said control computer over said communications network; and wherein said control computer is operable to display said images, thereby enabling said human operator to scan and animate said images viewed by said user.

19. The system of claim 14, further comprising at least one database connected to said control computer containing multimedia data, said multimedia data comprising at least one of the following: text, images, sound, videos, or 3D animation sequences, thereby enabling said human operator to respond to questions posed by said user and transmit information to said user.

20. The system of claim 14, wherein said at least one virtual object appears in the form of a stylized graphic personage in three dimensions.

21. The system of claim 20, wherein said personage symbolizes a product, a service, a trademark, a business or a collective for which said personage performs the service of informing, promoting or selling.

22. The system of claim 20, wherein said personage is animated with graphic expressions including animations of the lips and eyes and vocal expressions including intonations; and wherein said graphical and vocal expressions are controlled and produced by said human operator via said control computer, and express reactions of said personage to comportment, needs and expectations of said user.

23. The system of claim 20, wherein said personage controlled by said human operator is mobile on said images, varies in size, shape and colors, and gestures with head, arm and leg movements based on zones of said images and contextual relationships between said user and said human operator on whose behalf said personage acts.

24. The system of claim 23, wherein said personage comprises accessories for providing expressions and gestures to said personage, and moving and shifting said personage by said human operator according to the development of said contextual relationship.

25. The system of claim 24, wherein said personage comprises activation means to activate zones of said images by said human operator.

26. The system of claim 14, wherein said control computer is situated in a call center or said human operator's home.
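Claim 14 describes the system side of the method: a first server distributing the images or screen pages, a second server distributing the virtual-object data, and a control computer from which the human operator, optionally assisted or replaced by artificial intelligence following pre-established scenarios, drives the object on the user's computer. The sketch below illustrates that topology under assumed names; the routing logic and the scripted fallback are assumptions made for the example, not the patented implementation.

```python
# Illustrative topology sketch only; component names loosely mirror claim 14.
from dataclasses import dataclass

@dataclass
class FirstServer:
    """Distributes the still/animated images or screen pages to user computers."""
    def broadcast_page(self, page_id: str) -> dict:
        return {"type": "page", "id": page_id}

@dataclass
class ControlComputer:
    """Operator's workstation; drives the virtual object in real time."""
    operator_online: bool = True

    def next_command(self, user_utterance: str) -> dict:
        if self.operator_online:
            # In the claimed system the human operator reacts live to the user.
            return {"type": "avatar", "say": f"(operator replies to {user_utterance!r})"}
        # AI assistance partially or entirely replacing the operator,
        # following a pre-established scenario (hypothetical script below).
        scripted = {
            "hello": "Welcome! How can I help you?",
            "price": "Let me show you our current offers.",
        }
        return {"type": "avatar", "say": scripted.get(user_utterance.lower(), "One moment, please.")}

@dataclass
class SecondServer:
    """Relays virtual-object data between the control computer and user computers."""
    control: ControlComputer

    def relay(self, user_utterance: str) -> dict:
        return self.control.next_command(user_utterance)

if __name__ == "__main__":
    pages = FirstServer()
    operator = ControlComputer(operator_online=False)   # AI fallback active
    avatar_server = SecondServer(control=operator)
    print(pages.broadcast_page("home"))
    print(avatar_server.relay("hello"))
```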