IPC Classification Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC 7th ed.) |
Application No. | US-0963324 (2010-12-08)
Registration No. | US-8446455 (2013-05-21)
Inventors / Address | Lian, TiongHu; Dunn, Chris A.; Baldino, Brian J.
Applicant / Address |
Agent / Address |
Citation Info | Cited by: 7 / Patents cited: 61
Abstract
A method is provided in one example embodiment and includes monitoring a plurality of inputs associated with end users involved in a video session in which a plurality of displays are used. At least one of the inputs is associated with a frequency of speech of the end users. The method also includes determining a participation level for each of the end users based on the inputs, and determining which image data associated with the end users is to be rendered on a selected one of the plurality of displays based on the participation levels.
Representative Claims
1. A method, comprising: monitoring a plurality of end users, wherein each user generates a plurality of inputs and the end users are involved in a video session in which a plurality of displays are used, wherein at least one of the inputs is associated with a frequency of speech of the end users; determining a participation level for each of the end users based on the inputs; and determining which image data associated with the end users is to be rendered on a selected one of the plurality of displays based on the participation levels.

2. The method of claim 1, wherein the determining of the participation level includes calculating numeric values associated with the participation levels, and wherein the numeric values are used to assign virtual positions for the end users during the video session.

3. The method of claim 1, wherein one of the participation levels of the end users is preconfigured to have a base level that is higher than the participation levels of the other end users, and wherein the base level is set in accordance with an identity of one of the end users.

4. The method of claim 1, wherein the inputs include body language characteristics and eye gaze metrics of the end users.

5. The method of claim 1, wherein the inputs include a volume of speech associated with the end users.

6. The method of claim 1, wherein the inputs are weighted differently in order to determine the participation levels for each of the end users.

7. The method of claim 1, wherein the participation levels can be adjusted by an administrator during the video session.

8. Logic encoded in non-transitory media that includes code for execution and when executed by a processor operable to perform operations comprising: monitoring a plurality of end users, wherein each user generates a plurality of inputs and the end users are involved in a video session in which a plurality of displays are used, wherein at least one of the inputs is associated with a frequency of speech of the end users; determining a participation level for each of the end users based on the inputs; and determining which image data associated with the end users is to be rendered on a selected one of the plurality of displays based on the participation levels.

9. The logic of claim 8, wherein the determining of the participation level includes calculating numeric values associated with the participation levels, and wherein the numeric values are used to assign virtual positions for the end users during the video session.

10. The logic of claim 8, wherein one of the participation levels of the end users is preconfigured to have a base level that is higher than the participation levels of the other end users, and wherein the base level is set in accordance with an identity of one of the end users.

11. The logic of claim 8, wherein a higher participation level for a selected one of the end users is prioritized over a lower participation level for another one of the end users.

12. The logic of claim 8, wherein the inputs include body language characteristics and eye gaze metrics of the end users.

13. The logic of claim 8, wherein the inputs include a volume of speech associated with the end users.

14. An apparatus, comprising: a memory element configured to store data; a participation module; and a processor operable to execute instructions associated with the data, wherein the processor, the participation module, and the memory element cooperate in order to: monitor a plurality of end users, wherein each user generates a plurality of inputs and the end users are involved in a video session in which a plurality of displays are used, wherein at least one of the inputs is associated with a frequency of speech of the end users; determine a participation level for each of the end users based on the inputs; and determine which image data associated with the end users is to be rendered on a selected one of the plurality of displays based on the participation levels.

15. The apparatus of claim 14, wherein the determining of the participation level includes calculating numeric values associated with the participation levels, and wherein the numeric values are used to assign virtual positions for the end users during the video session.

16. The apparatus of claim 14, wherein one of the participation levels of the end users is preconfigured to have a base level that is higher than the participation levels of the other end users, and wherein the base level is set in accordance with an identity of one of the end users.

17. The apparatus of claim 14, wherein the inputs include body language characteristics and eye gaze metrics of the end users.

18. The apparatus of claim 14, wherein the inputs include a volume of speech associated with the end users.

19. The apparatus of claim 14, wherein the inputs are weighted differently in order to determine the participation levels for each of the end users.

20. The apparatus of claim 14, wherein the participation levels can be adjusted by an administrator during the video session.
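The claims above describe computing a numeric participation level per end user from weighted inputs (speech frequency, speech volume, eye gaze, with an optional preconfigured base level) and ranking users to decide whose image data each display renders. The following is a minimal illustrative sketch of that idea, not the patented implementation: the `UserInputs` fields, the specific weight values, and the `assign_displays` helper are all hypothetical choices made for the example, since the claims state only that the inputs "are weighted differently".

```python
# Hypothetical sketch of a weighted participation-level calculation
# (claims 1, 2, 3, 6) and display assignment based on the ranking.
from dataclasses import dataclass

@dataclass
class UserInputs:
    name: str
    speech_frequency: float  # e.g., utterances per minute
    speech_volume: float     # normalized 0..1 (claim 5)
    eye_gaze: float          # fraction of time gazing at camera (claim 4)
    base_level: float = 0.0  # preconfigured boost, e.g. for a chairperson (claim 3)

# Assumed weights; the claims only require that inputs be weighted differently.
WEIGHTS = {"speech_frequency": 0.5, "speech_volume": 0.2, "eye_gaze": 0.3}

def participation_level(u: UserInputs) -> float:
    """Numeric participation value from weighted inputs (claim 2)."""
    return (u.base_level
            + WEIGHTS["speech_frequency"] * u.speech_frequency
            + WEIGHTS["speech_volume"] * u.speech_volume
            + WEIGHTS["eye_gaze"] * u.eye_gaze)

def assign_displays(users, num_displays):
    """Rank users by participation level; map the top ones to display slots."""
    ranked = sorted(users, key=participation_level, reverse=True)
    return {slot: u.name for slot, u in enumerate(ranked[:num_displays])}

users = [
    UserInputs("Alice", speech_frequency=4.0, speech_volume=0.8, eye_gaze=0.9),
    UserInputs("Bob",   speech_frequency=1.0, speech_volume=0.5, eye_gaze=0.4),
    UserInputs("Carol", speech_frequency=2.5, speech_volume=0.6, eye_gaze=0.7,
               base_level=1.0),  # preconfigured higher base level (claim 3)
]
print(assign_displays(users, 2))  # -> {0: 'Carol', 1: 'Alice'}
```

Carol outranks Alice only because of her preconfigured base level; with linear weighting like this, an administrator adjusting a level mid-session (claim 7) reduces to updating `base_level` and re-ranking.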