Method and apparatus for automatic visual event detection
IPC Classification
Country / Type
United States (US) Patent
Status: Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
G06K-009/34
Application Number
US-0138033
(2005-05-26)
Registration Number
US-8249297
(2012-08-21)
Inventor / Address
Silver, William M.
Applicant / Address
Cognex Technology and Investment Corporation
Citation Information
Cited by: 1
Cited patents: 115
Abstract
Disclosed are methods and apparatus for automatic visual detection of events, for recording images of those events and retrieving them for display and human or automated analysis, and for sending synchronized signals to external equipment when events are detected. An event corresponds to a specific condition, among some time-varying conditions within the field of view of an imaging device, that can be detected by visual means based on capturing and analyzing digital images of a two-dimensional field of view in which the event may occur. Events may correspond to rare, short duration mechanical failures for which obtaining images for analysis is desirable. Events are detected by considering evidence obtained from an analysis of multiple images of the field of view, during which time moving mechanical components can be seen from multiple viewing perspectives.
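The abstract's core idea is that each captured frame yields a per-frame "event detection weight" expressing evidence that an event is occurring. The patent does not prescribe a particular visual analysis; the sketch below uses simple frame differencing as an illustrative stand-in, with the `noise_floor` and saturation factor chosen arbitrarily for the example.

```python
import numpy as np

def event_detection_weight(prev_frame: np.ndarray, frame: np.ndarray,
                           noise_floor: float = 8.0) -> float:
    """Return a weight in [0, 1] expressing evidence that an event is
    occurring in `frame`, relative to `prev_frame`.

    Frame differencing stands in for the visual analysis here; the
    patent does not specify this measure, and the thresholds are
    illustrative assumptions.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = np.mean(diff > noise_floor)   # fraction of changed pixels
    return float(min(1.0, changed * 10.0))  # saturate to [0, 1]
```

In a running system this function would be applied to each frame of a high-rate capture stream, producing the sequence of weights that the later selection and event-analysis steps consume.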
Representative Claims
1. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of a predetermined object; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.

2. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred; and producing a signal that is synchronized with a time at which the event occurs.

3. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred, the event analysis comprising a text string representing an expression in a syntax substantially similar to the syntax of a conventional programming language.

4. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the field of view comprising no more than about 40,000 pixels; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.

5. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred; and wherein the steps of the method are performed at a rate of not less than two hundred frames per second.

6. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting, responsive to an activity analysis of the plurality of event detection weights, a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights, the activity analysis comprising entering an active state when the visual analysis computes an event detection weight that indicates sufficient evidence that the event is occurring, and exiting the active state when the visual analysis computes a plurality of event detection weights that each indicate insufficient evidence that the event is occurring; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.

7. A method for automatic visual detection of an event, comprising: using a human-machine interface to specify the event, the human-machine interface comprising a logic view; capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.

8. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of a predetermined object; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred.

9. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred; and an output process that produces a signal that is synchronized with a time at which the event occurs.

10. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred, the event analysis process comprising a text string representing an expression in a syntax substantially similar to the syntax of a conventional programming language.

11. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the capture process comprising an imager comprising no more than about 40,000 pixels; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred.

12. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred; and wherein the capture process, the visual analysis process, the selection process, and the event analysis process operate at a rate of not less than two hundred frames per second.

13. A system for automatic visual detection of an event, comprising: a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; an activity analysis process that selects, responsive to the plurality of event detection weights, a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights, the activity analysis process comprising an active state that is entered when the visual analysis process computes an event detection weight that indicates sufficient evidence that the event is occurring, and that is exited when the visual analysis process computes a plurality of event detection weights that each indicate insufficient evidence that the event is occurring; and an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred.

14. A system for automatic visual detection of an event, comprising: a human-machine interface that specifies the event, the human-machine interface comprising a logic view; a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; a visual analysis process adapted to be carried out by at least one of a collection of digital hardware elements and a collection of computer software instructions residing on a non-transitory computer-readable medium that computes, responsive to the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; a selection process that selects a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and an event analysis process that determines, responsive to the subset of the plurality of event detection weights, whether the event has occurred.

15. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, wherein each frame comprises intensity values from a particular integration time; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.

16. The method of claim 15 wherein the plurality of frames are captured using an imager with a global shutter.

17. A method for automatic visual detection of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs; computing, using a processor, responsive to a visual analysis of the plurality of frames, a plurality of event detection weights, each event detection weight of the plurality of event detection weights corresponding respectively to each frame of the plurality of frames and comprising evidence that the event is occurring in the field of view; selecting, responsive to an activity analysis of the plurality of event detection weights, a plurality of event frames from the plurality of frames, the plurality of event frames corresponding to a subset of the plurality of event detection weights, wherein only those frames in which there is sufficient evidence that the event has occurred are selected as a result of the activity analysis, the activity analysis comprising entering an active state when the visual analysis computes an event detection weight that indicates sufficient evidence that the event is occurring, and exiting the active state when the visual analysis computes a plurality of event detection weights that each indicate insufficient evidence that the event is occurring; and determining, responsive to an event analysis of the subset of the plurality of event detection weights, whether the event has occurred.
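Claims 6, 13, and 17 describe an activity analysis as a small state machine: enter an active state when one frame's weight shows sufficient evidence, exit after several consecutive frames show insufficient evidence, and then judge from the collected weights whether an event occurred. The sketch below is a minimal illustration of that logic, not the patented implementation; the threshold values, the exit count, and the summed-evidence decision rule are all assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ActivityAnalyzer:
    """Illustrative state machine following claims 6, 13, and 17.

    All numeric parameters are example assumptions; the claims do not
    fix particular values or a particular decision rule.
    """
    enter_threshold: float = 0.5   # weight needed to enter the active state
    exit_threshold: float = 0.5    # weights below this count as low evidence
    exit_count: int = 3            # consecutive low-evidence frames to exit
    min_total_evidence: float = 1.5  # summed weight needed to report an event

    active: bool = False
    low_streak: int = 0
    event_weights: List[float] = field(default_factory=list)

    def feed(self, weight: float) -> Optional[bool]:
        """Consume one frame's event detection weight.

        Returns None while undecided, or True/False when the active
        state ends (True = event judged to have occurred). Only frames
        with sufficient evidence are kept, echoing claim 17.
        """
        if not self.active:
            if weight >= self.enter_threshold:
                self.active = True
                self.low_streak = 0
                self.event_weights = [weight]
            return None
        if weight < self.exit_threshold:
            self.low_streak += 1
            if self.low_streak >= self.exit_count:
                occurred = sum(self.event_weights) >= self.min_total_evidence
                self.active = False
                return occurred
        else:
            self.low_streak = 0
            self.event_weights.append(weight)
        return None
```

Feeding the sequence `[0.1, 0.8, 0.9, 0.7, 0.1, 0.0, 0.0]` enters the active state at the second frame, collects the three high-evidence frames, and reports that an event occurred once three consecutive low weights end the active state.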
Patents cited by this patent (115)
Hansen, Michael Wade; Burt, Peter Jeffrey, Apparatus and a method for detecting motion within an image sequence.
Suzuki Masato (Ibaraki JPX) Inaba Hiromi (Katsuta JPX) Nakamura Kiyoshi (Katsuta JPX) Nakata Naofumi (Katsuta JPX) Yamani Hiroaki (Katsuta JPX) Oonuma Naoto (Hitachi JPX), Apparatus and methods for detecting number of people waiting in an elevator hall using plural image processing means wit.
Tsujino,Hiroshi; Kondo,Hiroshi; Miura,Atsushi; Nagai,Shinichi; Akatsuka,Koji, Apparatus, program and method for detecting both stationary objects and moving objects in an image using optical flow.
Eleftheriadis Alexandros ; Jacquin Arnaud Eric, Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video.
Corwin Thomas L. (McLean VA) Richardson Henry R. (Alexandria VA) Kuo Stanley D. (Arlington VA) Stefanick Tom A. (Arlington VA) Keeler R. Norris (McLean VA) Pflibsen Kent (Tucson AZ) Calmes Lonnie K. , Automatic target detection process.
Pfeiffer Carl G. ; Tsai Cheng-Chih ; Gumas D. Spyro ; Calingaert Christopher ; Nguyen Danny D., Background adaptive target detection and tracking with multiple observation and processing stages.
Goren David P. (Ronkonkoma NY) Pavlidis Theodosios (Setauket NY) Spitz Glenn (Far Rockaway NY), Decoding bar codes from multiple scans using element replacement.
Baharav,Izhak; Blalock,Travis N.; Machida,Akihiro; Smith,George E.; Ang,Jin Kiong, Imaging system and apparatus for combining finger recognition and finger navigation.
Landt Jeremy A. ; Berka Ivan,CAX ; Carrender Curt L. ; Mortenson G. Russell ; Sondhi Vickram,CAX ; Speirs Donald F., Integrated multi-meter and wireless communication link.
Douglas James Beck ; Clarence Keith Griggs ; Jeffrey Erickson Roeca ; Jeffrey John Haeffele ; Mason Bradfield Samuels, Integrated trigger function display system and methodology for trigger definition development in a signal measurement system having a graphical user interface.
Cyril C. Marrion, Jr. ; Ivan A. Bachelder ; Edward A. Collins, Jr. ; Masayoki Kawata JP; Sateesh G. Nadabar, Machine vision system for identifying and assessing features of an article.
Scola Joseph R. ; Ruzhitsky Vladimir N. ; Jacobson Lowell D., Machine vision system for object feature analysis and validation based on multiple object images.
Nishi Noriyuki (Osaka JPX) Muto Tadashi (Yamatokoriyama JPX) Takayama Shinichi (Suzuka JPX), Method and apparatus for inspecting the cleanliness of top slibers.
Silver William M. (Medfield MA) Druker Samuel (Brookline MA) Romanik Philip (West Haven CT) Arbogast Carroll (Needham MA), Method and apparatus for interactively generating a computer program for machine vision analysis of an object.
Gerst, III, Carl W.; Equitz, William H.; Testa, Justin; Nadabar, Sateesh, Method and apparatus for providing omnidirectional lighting in a scanning device.
White Stanley A. ; Walley Kenneth S. ; Johnston James W. ; Henderson P. Michael ; Hale Kelly H. ; Andrews ; Jr. Warner B. ; Siann Jonathan I., Method and apparatus for sensing an audio signal that is sensitive to the audio signal and insensitive to background noise.
Schneider Volker Rainer,DEX ; Braach Hans-Joachim,DEX, Method and device for the automatic detection of surface defects for continuously cast products with continuous mechanical removal of the material.
Brooksby, Glen William; Mundy, Joseph Leagrand, Method for high dynamic range image construction based on multiple images with multiple illumination intensities.
Longacre ; Jr. Andrew (Skaneateles NY) Hammond ; Jr. Charles M. (Skaneateles NY) Havens William H. (Skaneateles NY) Pidhirny John M. (Skaneateles NY), Method of programmable digitization and bar code scanning apparatus employing same.
Cox Kenneth A. (Midlothian VA) Dante Henry M. (Midlothian VA) Maher Robert J. (Midlothian VA), Methods and apparatus for optically determining the acceptability of products.
Marrs David ; Bruno Louis ; Guszcza Joseph ; Meier Timothy ; Pankow Matthew ; Parker James A. ; Pettinelli John ; Randolph Bradley ; Reynolds Andrew ; Ruhlman Thomas, Multiple application multiterminal data collection network.
Michael David J. ; Wallack Aaron, Nonfeedback-based machine vision method for determining a calibration relationship between a camera and a moveable obje.
Heinrich, Harley Kent; Cesar, Christian Lenz; Cofino, Thomas A.; Friedman, Daniel J.; Goldman, Kenneth Alan; Greene, Sharon Louise; McAuliffe, Kevin P., Radio frequency identification system write broadcast capability.
Kubler, Joseph Jay; Grabon, Robert James, Radio frequency identification systems and methods for waking up data storage devices for wireless communication.
Brent G. Robertson ; Glenn W. Lee ; Roger J. Colburn, Scanning system for decoding two-dimensional barcode symbologies with a one-dimensional general purpose scanner.
Ferlitsch,Andrew Rodney; DeVore,Darwin Alan, Systems and methods for manipulating electronic information using a three-dimensional iconic representation.
Glier, Michael T.; Laird, Mark D.; Tinnemeier, Michael T.; Small, Steven I.; Sybel, Randall T., Traffic light violation prediction and recording system.
Eskridge, Thomas C.; Newberry, Jeff E.; DeYong, Mark R.; Dunn, Scott A.; Huffstutter, Wesley K.; Grace, John W.; Lumeyer, Marc A.; Ellison, Michael A.; Zoch, John R., User interface for automated optical inspection systems.
Ekchian Leon K. (Northridge CA) Johnson David D. (Simi Valley CA) Smith William F. (Los Angeles CA), Vector neural network for low signal-to-noise ratio detection of a target.
Michalopoulos Panos G. (St. Paul MN) Fundakowski Richard A. (St. Paul MN) Geokezas Meletios (White Bear Lake MN) Fitch Robert C. (Roseville MN), Vehicle detection through image processing for traffic surveillance and control.
Gasperi Michael L. (Racine WI) Roszkowski Richard M. (Brookfield WI) Christian Donald J. (New Berlin WI) Deklotz Joseph E. (Genesee WI), Video image processing system.
Whitman, Steven M.; Tremblay, Robert; Arbogast, Jr., Carroll McNeill, System, method and graphical user interface for displaying and controlling vision system operating parameters.