Bibliographic / IPC Information
Country / Type | United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.) | —
Application No. | US-0138023 (2005-05-26)
Registration No. | US-8249296 (2012-08-21)
Inventors / Address |
- Silver, William M.
- Phillips, Brian S.
Applicant / Address |
- Cognex Technology and Investment Corporation
Citation Info | Cited by: 0 / Cites: 122
Abstract
Disclosed are methods and apparatus for automatic visual detection of events, for recording images of those events and retrieving them for display and human or automated analysis, and for sending synchronized signals to external equipment when events are detected. An event corresponds to a specific condition, among some time-varying conditions within the field of view of an imaging device, that can be detected by visual means based on capturing and analyzing digital images of a two-dimensional field of view in which the event may occur. Events may correspond to rare, short duration mechanical failures for which obtaining images for analysis is desirable. Events are detected by considering evidence obtained from an analysis of multiple images of the field of view, during which time moving mechanical components can be seen from multiple viewing perspectives.
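The abstract's central idea — declaring an event only on the strength of evidence gathered across multiple captured images, rather than from any single frame — can be pictured with a small sketch. This is a hypothetical evidence test with invented scores and thresholds, not the patented analysis itself:

```python
def detect_event(frame_scores, threshold=0.5, min_frames=3):
    """Report an event when min_frames consecutive frames each carry
    detection evidence (a score) above threshold; return the index of
    the frame that confirmed the event, or None if no event is seen."""
    run = 0
    for i, score in enumerate(frame_scores):
        run = run + 1 if score > threshold else 0
        if run >= min_frames:
            return i
    return None

# A single noisy high score (index 1) is not enough evidence; three
# consecutive high-evidence frames confirm the event at index 6.
scores = [0.1, 0.9, 0.2, 0.1, 0.8, 0.7, 0.9, 0.3]
print(detect_event(scores))  # → 6
```

Requiring agreement across consecutive frames is one simple way to suppress single-frame noise when the events of interest are brief, as with the short-duration mechanical failures the abstract mentions.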
Representative Claims
1. A method for automatic visual detection and reporting of an event, comprising: capturing a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of an object along a path, the path having a mark point, the event further comprising a mark time at which the object is located at the mark point; choosing, responsive to a first analysis of the plurality of frames, a plurality of event frames from the plurality of frames, such that the first analysis indicates sufficient evidence that the object is located along the path for each frame of the plurality of event frames; obtaining a plurality of capture times corresponding to the plurality of event frames, each capture time of the plurality of capture times being a function of a time at which the corresponding event frame was captured; computing, responsive to a second analysis of the plurality of event frames, a plurality of location values, each location value of the plurality of location values responsive to a position of the object along the path in an associated event frame as computed by the second analysis; determining the mark time using the plurality of location values and the plurality of capture times; and producing, by an output signaler, a signal at a report time that follows the mark time by a delay interval.

2. The method of claim 1, wherein each capture time of the plurality of capture times is proportional to a time at which the corresponding event frame was captured.

3. The method of claim 1, wherein each capture time of the plurality of capture times is an encoder count obtained at a time at which the corresponding event frame was captured.

4. The method of claim 1, wherein determining the mark time comprises fitting a curve to the plurality of location values and the plurality of capture times.

5. The method of claim 4, wherein the curve is a line.

6. The method of claim 4, wherein the curve is a parabola.

7. The method of claim 4, wherein the curve has an apex; and determining the mark time further comprises determining the apex of the curve.

8. The method of claim 1, wherein the event comprises a flow event, the flow event comprising motion of the object across the mark point.

9. The method of claim 1, wherein the event comprises a stroke event, the stroke event comprising motion of advance and retreat along the path.

10. The method of claim 1, wherein the path has an apex; and the mark point is the apex of the path.

11. The method of claim 1, wherein the plurality of frames are captured, and the first analysis and second analysis are performed, at a rate of not less than two hundred frames per second.

12. The method of claim 1, wherein the field of view comprises no more than about 40,000 pixels.

13. The method of claim 1, further comprising selecting an event type from a plurality of event types, the event type corresponding to the motion of the object along the path; and wherein determining the mark time is responsive to the event type.

14. The method of claim 13, wherein the plurality of event types comprises a flow event type and a stroke event type.

15. The method of claim 13, wherein selecting the event type is responsive to a human-machine interface.

16. The method of claim 1, wherein the delay interval is selected responsive to a human-machine interface.

17. A system for automatic visual detection and reporting of an event, comprising: an imager configured to perform a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of an object along a path, the path having a mark point, the event further comprising a mark time at which the object is located at the mark point; a first selection process that chooses a plurality of event frames from the plurality of frames, such that the first selection process judges that there is sufficient evidence that the object is located along the path for each frame of the plurality of event frames; a timing process that obtains a plurality of capture times corresponding to the plurality of event frames, each capture time of the plurality of capture times being a function of a time at which the corresponding event frame was captured; a first analysis process that computes a plurality of location values, each location value of the plurality of location values responsive to a position of the object along the path in an associated event frame; a second analysis process that determines the mark time using the plurality of location values and the plurality of capture times; and an input/output module configured to perform an output process that produces a signal at a report time that follows the mark time by a delay interval.

18. The system of claim 17, wherein each capture time of the plurality of capture times is proportional to a time at which the corresponding event frame was captured.

19. The system of claim 17, wherein each capture time of the plurality of capture times is an encoder count obtained at a time at which the corresponding event frame was captured.

20. The system of claim 17, wherein the second analysis process comprises fitting a curve to the plurality of location values and the plurality of capture times.

21. The system of claim 20, wherein the curve is a line.

22. The system of claim 20, wherein the curve is a parabola.

23. The system of claim 20, wherein the curve has an apex; and the second analysis process further comprises determining the apex of the curve.

24. The system of claim 17, wherein the event comprises a flow event, the flow event comprising motion of the object across the mark point.

25. The system of claim 17, wherein the event comprises a stroke event, the stroke event comprising motion of advance and retreat along the path.

26. The system of claim 17, wherein the path has an apex; and the mark point is the apex of the path.

27. The system of claim 17, wherein the capture process, the first selection process, and the first analysis process operate at a rate of not less than two hundred frames per second.

28. The system of claim 17, wherein the capture process comprises an image capture device comprising no more than about 40,000 pixels.

29. The system of claim 17, further comprising a second selection process that selects an event type from a plurality of event types, the event type corresponding to the motion of the object along the path; and wherein the second analysis process determines the mark time using the event type.

30. The system of claim 29, wherein the plurality of event types comprises a flow event type and a stroke event type.

31. The system of claim 29, wherein the second analysis process comprises a human-machine interface.

32. The system of claim 17, further comprising a human-machine interface that selects the delay interval.

33. A system for automatic visual detection and reporting of an event, comprising: an imager with a global shutter configured to perform a capture process that captures a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of an object along a path, the path having a mark point, the event further comprising a mark time at which the object is located at the mark point; a first selection process that chooses a plurality of event frames from the plurality of frames, such that the first selection process judges that there is sufficient evidence that the object is located along the path for each frame of the plurality of event frames; a timing process that obtains a plurality of capture times corresponding to the plurality of event frames, each capture time of the plurality of capture times being a function of a time at which the corresponding event frame was captured; a first analysis process that computes a plurality of location values, each location value of the plurality of location values responsive to a position of the object along the path in an associated event frame; a second analysis process that determines the mark time using the plurality of location values and the plurality of capture times; and an input/output module configured to perform an output process that produces a signal at a report time that follows the mark time by a delay interval.

34. The system of claim 33, wherein each capture time of the plurality of capture times is proportional to a time at which the corresponding event frame was captured.

35. The system of claim 33, wherein each capture time of the plurality of capture times is an encoder count obtained at a time at which the corresponding event frame was captured.

36. The system of claim 33, wherein the second analysis process comprises fitting a curve to the plurality of location values and the plurality of capture times.

37. An article of manufacture including a tangible computer-readable medium having instructions stored thereon that, if executed by a computing device, cause the computing device to perform operations comprising: receiving a plurality of frames, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which an event occurs, the event comprising a motion of an object along a path, the path having a mark point, the event further comprising a mark time at which the object is located at the mark point; selecting a plurality of event frames from the plurality of frames based on there being sufficient evidence that the object is located along the path for each frame of the plurality of event frames; obtaining a plurality of capture times corresponding to the plurality of event frames, each capture time of the plurality of capture times being a function of a time at which the corresponding event frame was captured; computing a plurality of location values, each location value of the plurality of location values responsive to a position of the object along the path in an associated event frame; computing the mark time using the plurality of location values and the plurality of capture times; and providing a value responsive to the mark time for consumption by an input/output module, wherein the input/output module uses the value for control in an industrial process.

38. The article of claim 37, wherein each capture time of the plurality of capture times is an encoder count obtained at a time at which the corresponding event frame was captured by an imager with a global shutter.

39. The article of claim 37, wherein the capture times are a function of the time the event frame was captured by an imager with a global shutter.

40. The article of claim 37, the tangible computer-readable medium having further instructions stored thereon that, if executed by a computing device, cause the computing device to perform operations further comprising: fitting a curve to the plurality of location values and the plurality of capture times.

41. The article of claim 37, wherein computing the location values and computing the mark time are performed at a rate of not less than two hundred frames per second.

42. A system for automatic visual detection and reporting of an event, comprising: a data processing device programmed to: receive a plurality of frames, captured by an imager with a global shutter, each frame in the plurality of frames comprising an image of a two-dimensional field of view in which the event occurs, the event comprising a motion of an object along a path, the path having a mark point, the event further comprising a mark time at which the object is located at the mark point; choose a plurality of event frames from the plurality of frames based on sufficient evidence that the object is located along the path for each frame of the plurality of event frames; obtain a plurality of capture times corresponding to the plurality of event frames, each capture time of the plurality of capture times being a function of a time at which the corresponding event frame was captured; compute a plurality of location values, each location value of the plurality of location values responsive to a position of the object along the path in an associated event frame; calculate the mark time using the plurality of location values and the plurality of capture times; and provide a value responsive to the mark time for consumption by an input/output module, wherein the input/output module uses the value for control in an industrial process.

43. The system of claim 42, wherein each capture time of the plurality of capture times is an encoder count obtained at a time at which the corresponding event frame was captured.

44. The system of claim 42, further comprising: fitting a curve to the plurality of location values and the plurality of capture times.

45. The system of claim 42, wherein computing the location values and computing the mark time are performed at a rate of not less than two hundred frames per second.
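Claims 4–10 describe determining the mark time by fitting a curve to the (capture time, location value) pairs: a line for a flow event, where the mark time is when the object crosses the mark point, and a parabola for a stroke event, where the apex marks the turnaround of advance and retreat. A minimal sketch of both fits follows; the function names and sample data are invented for illustration, and a real system would also apply the claimed delay interval before signaling:

```python
import numpy as np

def mark_time_flow(capture_times, locations, mark_point):
    # Flow event: fit a line location = a*t + b to the samples, then
    # solve a*t + b = mark_point for the crossing time t.
    a, b = np.polyfit(capture_times, locations, 1)
    return (mark_point - b) / a

def mark_time_stroke(capture_times, locations):
    # Stroke event: fit a parabola location = a*t**2 + b*t + c; the
    # apex (the advance/retreat turnaround) is at t = -b / (2a).
    a, b, _c = np.polyfit(capture_times, locations, 2)
    return -b / (2.0 * a)

# Object advancing 10 units per time unit crosses mark point 25.0 at t = 1.5.
t_flow = mark_time_flow([0.0, 1.0, 2.0, 3.0], [10.0, 20.0, 30.0, 40.0], 25.0)

# Advance-and-retreat samples from a parabola whose apex is at t = 2.0.
t_stroke = mark_time_stroke([0.0, 1.0, 2.0, 3.0, 4.0], [1.0, 4.0, 5.0, 4.0, 1.0])
```

Because the fit interpolates between frames, the mark time can be resolved more finely than the frame interval, which is what makes a synchronized output signal at `mark_time + delay_interval` useful even at the claimed rates of two hundred frames per second or more.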