IPC Classification Information
Country / Type | United States (US) Patent, Registered
International Patent Classification (IPC 7th ed.) | (not listed)
Application No. | US-0762076 (2010-04-16)
Registration No. | US-8600100 (2013-12-03)
Inventor / Address | (not listed)
Applicant / Address | (not listed)
Agent / Address | Schwegman Lundberg & Woessner, P.A.
Citation Information | Cited by: 15 / Patents cited: 56
Abstract
A method of assessing an individual through facial muscle activity and expressions includes receiving a visual recording stored on a computer-readable medium of an individual's non-verbal responses to a stimulus, the non-verbal response comprising facial expressions of the individual. The recording is accessed to automatically detect and record expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images. The contemporaneously detected and recorded expressional repositionings are automatically coded to an action unit, a combination of action units, and/or at least one emotion. The action unit, combination of action units, and/or at least one emotion are analyzed to assess one or more characteristics of the individual to develop a profile of the individual's personality in relation to the objective for which the individual is being assessed.
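The pipeline the abstract describes — frame-to-frame comparison of selected facial features, coding of detected repositionings to FACS action units and emotions — can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the feature names, the 2-pixel threshold, and the AU/emotion lookup tables are assumptions made up for the example (AU12, the lip corner puller, is associated with smiling in the FACS literature).

```python
# Minimal sketch of the abstract's pipeline, on synthetic landmark data.
# Thresholds and the AU/emotion tables are illustrative assumptions.

# (b) Detect "expressional repositioning": compare each tracked facial
# feature's position across sequential frames.
def detect_repositioning(frames, threshold=2.0):
    """Return (frame_idx, feature, displacement) for moves past threshold."""
    events = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        for feature, (x0, y0) in prev.items():
            x1, y1 = cur[feature]
            disp = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if disp >= threshold:
                events.append((i, feature, disp))
    return events

# (c) Code repositionings to action units, then to emotions
# (hypothetical mappings; a real coder uses the full FACS tables).
FEATURE_TO_AU = {
    "inner_brow": "AU1",   # inner brow raiser
    "lip_corner": "AU12",  # lip corner puller
}
AU_TO_EMOTION = {"AU12": "positive", "AU1": "negative"}

def code_events(events):
    aus = [FEATURE_TO_AU[f] for _, f, _ in events if f in FEATURE_TO_AU]
    emotions = [AU_TO_EMOTION[au] for au in aus if au in AU_TO_EMOTION]
    return aus, emotions

# Two frames: the lip corner moves 5 px (a smile-like repositioning),
# while the inner brow moves only 0.5 px and is ignored.
frames = [
    {"inner_brow": (50.0, 40.0), "lip_corner": (60.0, 80.0)},
    {"inner_brow": (50.0, 40.5), "lip_corner": (60.0, 75.0)},
]
events = detect_repositioning(frames)
aus, emotions = code_events(events)
```

In a real system the per-frame landmark dictionaries would come from a face-tracking library rather than being hand-written, and step (d) of the abstract would aggregate the coded emotions into the personality profile.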
Representative Claims
1. A method of assessing an individual through facial muscle activity and expressions, the method comprising: (a) receiving a recording stored on a computer-readable medium of an individual's response to a stimulus, the recording including a non-verbal response comprising facial expressions of the individual; (b) accessing the computer-readable medium for detecting and recording expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images; (c) coding contemporaneously detected and recorded expressional repositionings to at least one of an action unit, a combination of action units, or at least one emotion; and (d) analyzing the at least one of an action unit, a combination of action units, or at least one emotion to assess one or more characteristics of the individual to develop a profile of the individual's personality in relation to the objective for which the individual is being assessed, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises: identifying moments of the recording that elicited emotion based on the at least one of an action unit, a combination of action units, or at least one emotion; and developing the profile of the individual's personality based on a percentage of positive versus negative emotions and the specific emotions shown during the stimulus.

2. The method of claim 1, wherein the received recording comprises a verbal response to the stimulus, and wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises assessing the at least one emotion against a portion of the individual's verbal response to assess one or more characteristics of the individual with respect to the individual's verbal response.

3. The method of claim 2, wherein the verbal responses are categorized by topic.

4. The method of claim 2, further comprising creating a transcript of at least a portion of the individual's verbal response, and analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises one or more of: identifying places in the transcript of emotional response; identifying the valence of the emotions for places in the transcript; identifying one or more emotions that are most predominant with respect to at least portions of the transcript; and identifying discrepancies between the verbal response and emotive response of the individual.

5. The method of claim 1, wherein detecting and recording facial expressional repositioning of each of a plurality of selected facial features comprises recording the timing of the detected repositioning for peak emoting and real-time duration.

6. The method of claim 1, wherein coding contemporaneously detected and recorded expressional repositionings comprises automatically coding a single action unit or combination of action units to at least one corresponding emotion by percentage or type.

7. The method of claim 1, wherein coding contemporaneously detected and recorded expressional repositionings comprises coding a single action unit or combination of action units to a weighted value.

8. The method of claim 1, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises determining whether the individual's emotional response is predominantly positive, neutral, or negative.

9. The method of claim 1, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises quantifying the volume of emotion to determine the degree to which the individual is engaged or enthusiastic.

10. The method of claim 1, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises analyzing the degree of intensity for each action unit or combination of action units to determine the degree to which the individual is engaged or enthusiastic.

11. The method of claim 1, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises corresponding the at least one of an action unit, a combination of action units, or at least one emotion by stimulus type to relate emotional response data for the individual to a formula for determining the degree to which the individual fits one or more of the Big Five Factor model personality traits.

12. The method of claim 1, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises corresponding the at least one of an action unit, a combination of action units, or at least one emotion by stimulus type for determining the degree to which the individual is susceptible to one or more of the biases identified as part of Behavioral Economics.

13. The method of claim 1, wherein the stimulus comprises one or more of questions, statements, or scenarios.

14. The method of claim 13, wherein the objective the individual is being assessed for is the individual's suitability for a job position or task related to a job.

15. The method of claim 13, wherein the objective the individual is being assessed for is to determine potential romantic partners.

16. The method of claim 13, wherein the objective the individual is being assessed for is to ascertain one or more of emotional responses, potential veracity, personality type, and levels of enthusiasm for legal applications.

17. The method of claim 1, further comprising linking eye tracking data from the recording with the at least one of an action unit, a combination of action units, or at least one emotion.

18. The method of claim 1, wherein coding contemporaneously detected and recorded expressional repositionings to at least one of an action unit, a combination of action units, or at least one emotion comprises coding contemporaneously detected and recorded expressional repositionings to a plurality of weighted emotions.

19. A non-transitory machine-readable medium including instructions that, when executed by a machine, cause the machine to perform operations comprising: (a) receiving a recording stored on a computer-readable medium of an individual's response to a stimulus, the recording including a non-verbal response comprising facial expressions of the individual; (b) accessing the computer-readable medium for automatically detecting and recording expressional repositioning of each of a plurality of selected facial features by conducting a computerized comparison of the facial position of each selected facial feature through sequential facial images; (c) automatically coding contemporaneously detected and recorded expressional repositionings to at least one of an action unit, a combination of action units, or at least one emotion; and (d) analyzing the at least one of an action unit, a combination of action units, or at least one emotion to assess one or more characteristics of the individual to develop a profile of the individual's personality in relation to the objective for which the individual is being assessed, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises: identifying moments of the recording that elicited emotion based on the at least one of an action unit, a combination of action units, or at least one emotion; and developing the profile of the individual's personality based on a percentage of positive versus negative emotions and the specific emotions shown during the stimulus.

20. The machine-readable medium of claim 19, wherein the received recording comprises a verbal response to the stimulus, and wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises assessing the at least one emotion against a portion of the individual's verbal response to assess one or more characteristics of the individual with respect to the individual's verbal response.

21. The machine-readable medium of claim 20, further comprising instructions causing the machine to perform operations comprising creating a transcript of at least a portion of the individual's verbal response, and analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises one or more of: identifying places in the transcript of emotional response; identifying the valence of the emotions for places in the transcript; identifying one or more emotions that are most predominant with respect to at least portions of the transcript; and identifying discrepancies between the verbal response and emotive response of the individual.

22. The machine-readable medium of claim 19, wherein coding contemporaneously detected and recorded expressional repositionings comprises automatically coding a single action unit or combination of action units to at least one corresponding emotion by percentage or type.

23. The machine-readable medium of claim 19, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises determining whether the individual's emotional response is predominantly positive, neutral, or negative.

24. The machine-readable medium of claim 19, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises quantifying the volume of emotion to determine the degree to which the individual is engaged or enthusiastic.

25. The machine-readable medium of claim 19, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises analyzing the degree of intensity for each action unit or combination of action units to determine the degree to which the individual is engaged or enthusiastic.

26. The machine-readable medium of claim 19, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises corresponding the at least one of an action unit, a combination of action units, or at least one emotion by stimulus type to relate emotional response data for the individual to a formula for determining the degree to which the individual fits one or more of the Big Five Factor model personality traits.

27. The machine-readable medium of claim 19, wherein analyzing the at least one of an action unit, a combination of action units, or at least one emotion comprises corresponding the at least one of an action unit, a combination of action units, or at least one emotion by stimulus type for determining the degree to which the individual is susceptible to one or more of the biases identified as part of Behavioral Economics.

28. The machine-readable medium of claim 19, further comprising instructions causing the machine to perform operations comprising linking eye tracking data from the recording with the at least one of an action unit, a combination of action units, or at least one emotion.
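Step (d) of claim 1 develops the profile from "a percentage of positive versus negative emotions and the specific emotions shown." A minimal sketch of that aggregation, assuming a made-up valence grouping of six basic emotions (the patent does not specify one):

```python
from collections import Counter

# Illustrative valence grouping; not taken from the patent.
VALENCE = {
    "happiness": "positive", "surprise": "positive",
    "anger": "negative", "fear": "negative",
    "sadness": "negative", "disgust": "negative",
}

def develop_profile(coded_emotions):
    """Summarize a sequence of coded emotions per claim 1(d):
    positive vs. negative percentages plus the specific emotions shown."""
    counts = Counter(coded_emotions)
    total = sum(counts.values())
    pos = sum(n for e, n in counts.items() if VALENCE.get(e) == "positive")
    if total == 0:
        return {"pct_positive": 0.0, "pct_negative": 0.0,
                "specific_emotions": {}}
    return {
        "pct_positive": round(100.0 * pos / total, 1),
        "pct_negative": round(100.0 * (total - pos) / total, 1),
        "specific_emotions": dict(counts),
    }

# Emotions coded from a hypothetical recording: mostly happiness, one anger.
profile = develop_profile(["happiness", "happiness", "anger", "happiness"])
```

How such a summary maps onto Big Five traits or behavioral-economics biases (claims 11-12 and 26-27) is left to the patent's unstated formulas; the sketch covers only the tallying step.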