IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): —
Application No.: US-0645663 (2009-12-23)
Registration No.: US-8265743 (2012-09-11)
Inventors / Address:
- Aguilar, Mario
- Hawkins, Aaron
- Connolly, Patrick
- Qian, Ming
Applicant / Address:
- Teledyne Scientific & Imaging, LLC
Agent / Address: —
Citation information: cited by 6 patents; cites 23 patents
Abstract

Fixation-locked measurement of brain activity generates time-coded cues indicative of whether an operator exhibited a significant cognitive response to task-relevant stimuli. The free-viewing environment is one in which the presentation of stimuli is natural to the task, encompassing both pre- and post-fixation stimuli, and the operator is allowed to move his or her eyes naturally to perform the task.
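The central operation described here, slicing continuous EEG into epochs keyed to fixation events reported by an eye tracker, can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the sampling rate, the pre-/post-fixation window lengths, and the function name are all assumptions, since the patent leaves these parameters open.

```python
import numpy as np

def extract_fixation_locked_epochs(eeg, fs, fixation_times, pre_s=0.2, post_s=0.6):
    """Slice continuous EEG (channels x samples) into one fixation-locked
    epoch per fixation event.

    pre_s/post_s are illustrative window lengths (seconds); the patent
    does not specify particular durations.
    """
    pre = int(pre_s * fs)
    post = int(post_s * fs)
    epochs, kept_times = [], []
    for t in fixation_times:
        i = int(round(t * fs))
        if i - pre < 0 or i + post > eeg.shape[1]:
            continue  # skip fixations too close to the recording edges
        epochs.append(eeg[:, i - pre:i + post])
        kept_times.append(t)  # keep the time code paired with each cue
    return np.stack(epochs), kept_times

# Toy example: 8 channels, 10 s of 256 Hz data, three fixation events.
fs = 256
eeg = np.random.randn(8, 10 * fs)
epochs, times = extract_fixation_locked_epochs(eeg, fs, [1.0, 4.5, 9.2])
print(epochs.shape)  # one (channels x samples) epoch per fixation
```

The returned time codes mirror the claims' requirement that each cue be output together with the time code of its associated fixation event.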
Representative Claims
1. A method of single-trial detection of significant cognitive responses to stimuli, comprising: measuring EEG data of an operator's brain activity to stimuli from a plurality of electrodes placed on the operator's scalp; tracking the operator's free eye movement to determine fixation events; applying a fixation-locked window to the EEG data to generate a time segment of EEG data for each said fixation event; extracting one or more features from each time segment of EEG data; for each said fixation event, presenting said one or more features to a classifier to generate a fixation-locked cue as a likelihood output indicative of whether the operator exhibited a significant cognitive response to a presented stimulus; and outputting a sequence of the cues with a time code of the associated fixation event.

2. The method of claim 1, wherein the cues and time-code are output in real time.

3. A method of single-trial detection of significant cognitive responses to stimuli, comprising: measuring EEG data of an operator's brain activity to stimuli from a plurality of electrodes placed on the operator's scalp; tracking the operator's free eye movement to determine fixation events; applying a fixation-locked window to the EEG data to generate a time segment of EEG data for each said fixation event, wherein the fixation-locked window comprises a pre-fixation window including EEG data before the fixation event and a post-fixation window including EEG data after the fixation event; extracting one or more features from each time segment of EEG data; for each said fixation event, presenting said one or more features to a classifier to generate a fixation-locked cue indicative of whether the operator exhibited a significant cognitive response to a presented stimulus; and outputting a sequence of the cues with a time code of the associated fixation event.

4. The method of claim 3, wherein the cue is a binary decision output.

5. The method of claim 3, wherein the cue is a likelihood output.

6. The method of claim 3, wherein said one or more features are extracted separately from the EEG data in said pre-fixation and post-fixation windows.

7. The method of claim 6, wherein the features extracted from the EEG data in the pre-fixation window are presented to a pre-fixation locked sub-classifier that generates a pre-fixation cue, the features extracted from the EEG data in the post-fixation window are presented to a post-fixation locked sub-classifier that generates a post-fixation cue, and the pre-fixation cue and post-fixation cue are presented to a fusion classifier that generates the cue.

8. The method of claim 7, wherein the sub-classifiers are trained by: monitoring a subject's EEG signals and eye-movement; displaying a pattern of targets and non-targets to the subject as stimuli; highlighting in sequence a different one of the targets or non-targets, causing the subject's eyes to saccade to the highlighted target or non-target; monitoring subject input indicating subject detection of a highlighted target; processing the subject's eye-movement to determine a fixation-event for each highlighted target or non-target; applying a pre-fixation window and a post-fixation window to the EEG data to generate pre-fixation and post-fixation time segments of EEG data for each said fixation event; separately extracting one or more pre-fixation and one or more post-fixation features from said pre-fixation and post-fixation time segments of EEG data, respectively; and for each said fixation event, presenting said one or more pre-fixation features and a target/non-target supervised output to train the pre-fixation sub-classifier to generate a pre-fixation cue indicative of whether the subject exhibited a significant cognitive response, presenting said one or more post-fixation features and a target/non-target supervised output to train the post-fixation sub-classifier to generate a post-fixation cue indicative of whether the subject exhibited a significant cognitive response, and presenting the pre-fixation and post-fixation cues and the target/non-target supervised output to train a fusion classifier to generate a fixation-locked cue indicative of whether the subject exhibited a significant cognitive response.

9. The method of claim 7, wherein said step of extracting features from the EEG data in the post-fixation window and presenting them to the post-fixation locked sub-classifier comprises: subdividing the time segment EEG data in the post-fixation window into a plurality of time sub-segments, each with a different offset to the fixation-event; separately extracting features from each said time sub-segment of EEG data; presenting the extracted features to a respective plurality of spatial sub-classifiers trained to detect spatial patterns of said extracted features during different time segments after the fixation event and to generate first level outputs indicative of the occurrence or absence of a significant brain response; and combining the plurality of spatial sub-classifier first level outputs to detect temporal patterns across the different time sub-segments relating to the evolution of the non-stationary brain response to task-relevant stimulus and to generate a second level output as the post-fixation cue indicative of the occurrence or absence of the significant non-stationary brain response.

10. The method of claim 9, wherein the first level outputs are combined using a feature-level fuser implemented using a probabilistic or recurrent learning method.

11. The method of claim 9, wherein the first level outputs are maximum likelihood estimates that are combined using a decision-level fuser to achieve an optimal combination of the maximum likelihood estimates.

12. The method of claim 9, wherein said step of extracting features from the EEG data in the pre-fixation window and presenting the features to the pre-fixation locked sub-classifier comprises: subdividing the time segment EEG data in the pre-fixation window into a plurality of time sub-segments, each with a different offset to the fixation-event; separately extracting features from each said time sub-segment of EEG data; presenting the extracted features to a respective plurality of spatial sub-classifiers trained to detect spatial patterns of said extracted features during different time segments before the fixation event and to generate first level outputs indicative of the occurrence or absence of a significant brain response; and combining the plurality of spatial sub-classifier first level outputs to detect temporal patterns across the different time sub-segments relating to the evolution of the non-stationary brain response to task-relevant stimulus and to generate a second level output as the pre-fixation cue indicative of the occurrence or absence of the significant non-stationary brain response.

13. The method of claim 7, wherein the stimuli comprise both pre-fixation stimuli, in which the stimulus precedes the fixation-event, and post-fixation stimuli, in which the stimulus and fixation-event coincide.

14. The method of claim 3, wherein the classifier generates a tag with each cue that classifies either the type of stimulus that triggered the response or the type of brain activity that triggered the response.

15. The method of claim 14, wherein the tag identifies a particular event related potential (ERP) that triggered the response.

16. The method of claim 3, further comprising: processing the sequence of time-coded cues to reinforce or reject the cue.

17. The method of claim 3, further comprising: computing a saccade metric from the free eye movement between fixation events.

18. The method of claim 3, wherein the presentation of stimuli to the operator is unconstrained in time and position with respect to the operator.

19. A method of single-trial detection of significant cognitive responses to stimuli, comprising: measuring EEG data of an operator's brain activity to stimuli from a plurality of electrodes placed on the operator's scalp; providing a time-code for the stimuli; tracking the operator's free eye movement to determine fixation events; applying a fixation-locked window to the EEG data to generate a time segment of EEG data for each said fixation event; extracting one or more features from each time segment of EEG data; for each said fixation event, presenting said one or more features to a classifier to generate a fixation-locked cue indicative of whether the operator exhibited a significant cognitive response to a presented stimulus; outputting a sequence of the cues with a time code of the associated fixation event; and correlating the time-code of the output cues with the time code of the stimuli.

20. An apparatus for single-trial detection of significant cognitive responses to stimuli, comprising: a plurality of electrodes configured to be placed on the operator's scalp that measure EEG data of the operator's brain activity to stimuli; an eye-tracking device that tracks the operator's free eye movement and generates eye movement signals; a fixation processor that processes the eye movement signals to generate time-code fixation events; a signal processor that applies a pre-fixation window to generate a time segment of EEG data before the fixation event and a post-fixation window to generate a time segment of EEG data after the fixation event to the EEG data for each said fixation event; and a cognitive response processor comprising a feature extractor that extracts one or more features from the EEG data in each of the pre-fixation and post-fixation windows for each time segment of EEG data and presents said one or more features to pre-fixation and post-fixation locked classifiers, respectively, to generate pre-fixation and post-fixation cues that are presented to a fusion classifier to generate a sequence of fixation-locked time-coded cues indicative of whether the operator exhibited a significant cognitive response to a stimulus.

21. The apparatus of claim 20, further comprising: a processor that computes a saccade metric from the free eye movement between fixation events.

22. An apparatus for single-trial detection of significant cognitive responses to stimuli, comprising: a plurality of electrodes configured to be placed on the operator's scalp that measure EEG data of the operator's brain activity to stimuli; an eye-tracking device that tracks the operator's free eye movement and generates eye movement signals; a fixation processor that processes the eye movement signals to generate time-code fixation events; a signal processor that applies a fixation-locked window to the EEG data to generate a time segment of EEG data for each said fixation event, wherein said signal processor subdivides the time segment EEG data into a plurality of time sub-segments, each with a different offset to the fixation-event; and a cognitive response processor comprising a feature extractor that extracts one or more features from each said time sub-segment of EEG data and presents the extracted features to a respective plurality of spatial sub-classifiers trained to detect spatial patterns of said extracted features during different time sub-segments and to generate first level outputs indicative of the occurrence or absence of a significant brain response, said plurality of first level outputs presented to a temporal classifier trained to detect temporal patterns across the different time sub-segments relating to the evolution of the non-stationary brain response to task-relevant stimulus and to generate a second level output as a sequence of fixation-locked time-coded cues indicative of the occurrence or absence of the significant non-stationary brain response to a stimulus.

23. A method of training a classifier for single-trial detection of significant cognitive responses to pre-fixation and post-fixation stimuli, comprising: monitoring a subject's EEG signals and eye-movement; displaying a pattern of targets and non-targets to the subject; highlighting in sequence a different one of the targets or non-targets, causing the subject's eyes to saccade to the highlighted target or non-target; monitoring subject input indicating subject detection of a highlighted target; processing the subject's eye-movement to determine a fixation-event for each highlighted target or non-target; applying a fixation-locked window to the EEG data to generate a time segment of EEG data for each fixation-event; extracting one or more features from said time segment of EEG data; and for each said fixation event, presenting said one or more features and a target/non-target supervised output to train the classifier to generate a fixation-locked cue indicative of whether the subject exhibited a significant cognitive response to a target.

24. The method of claim 23, wherein said fixation-locked window comprises a pre-fixation window and a post-fixation window and said classifier comprises a pre-fixation locked sub-classifier, a post-fixation locked sub-classifier and a fusion classifier, the method further comprising: separately extracting one or more pre-fixation and one or more post-fixation features from pre-fixation and post-fixation time segments of EEG data, respectively; and for each said fixation event, presenting said one or more pre-fixation features and the target/non-target supervised output to train the pre-fixation sub-classifier to generate a pre-fixation cue indicative of whether the subject exhibited a significant cognitive response to targets, presenting said one or more post-fixation features and the target/non-target supervised output to train the post-fixation sub-classifier to generate a post-fixation cue indicative of whether the subject exhibited a significant cognitive response to targets, and presenting the pre- and post-fixation cues and the target/non-target supervised output to train a fusion classifier to generate a fixation-locked cue indicative of whether the subject exhibited a significant cognitive response to targets.

25. The method of claim 23, wherein the pattern of targets and non-targets is rearranged periodically.

26. The method of claim 23, wherein the pattern is displayed until a specified number of targets is detected.

27. The method of claim 23, wherein the pattern is displayed on a background reminiscent of an expected background the subject will be confronted with during task performance.
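Claims 9, 12, and 22 describe a two-level architecture: the fixation-locked window is subdivided into time sub-segments, a spatial sub-classifier scores each sub-segment, and a temporal classifier combines the first-level outputs into a single cue. A minimal numpy sketch of that data flow follows; the linear models, sigmoid outputs, and random weights are placeholders standing in for trained sub-classifiers, since the patent leaves the learning method open (probabilistic, recurrent, or decision-level fusion).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hierarchical_cue(epoch, n_sub, spatial_ws, temporal_w):
    """Two-level classifier sketch (spatial, then temporal).

    epoch      : channels x samples fixation-locked EEG epoch
    n_sub      : number of time sub-segments at different offsets
    spatial_ws : one weight vector per sub-segment (placeholder models)
    temporal_w : weights combining the first-level outputs
    """
    # Subdivide the epoch into sub-segments at different fixation offsets.
    subs = np.array_split(epoch, n_sub, axis=1)
    # First level: each spatial sub-classifier scores the spatial pattern
    # of its sub-segment (here, a linear model on per-channel means).
    first_level = np.array(
        [sigmoid(w @ s.mean(axis=1)) for w, s in zip(spatial_ws, subs)]
    )
    # Second level: a temporal classifier combines the first-level outputs
    # across sub-segments into one fixation-locked cue (a likelihood).
    return sigmoid(temporal_w @ first_level), first_level

rng = np.random.default_rng(0)
epoch = rng.standard_normal((8, 204))        # one fixation-locked epoch
n_sub = 4
spatial_ws = rng.standard_normal((n_sub, 8))  # untrained placeholder weights
temporal_w = rng.standard_normal(n_sub)
cue, first = hierarchical_cue(epoch, n_sub, spatial_ws, temporal_w)
print(float(cue), first.shape)
```

The cue is a likelihood in (0, 1), matching claim 5's likelihood-output variant; thresholding it would yield claim 4's binary decision output.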