IPC Classification Information
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application Number: US-0136019 (2005-05-24)
Registration Number: US-8249329 (2012-08-21)
Inventor / Address: (not listed)
Applicant / Address: Cognex Technology and Investment Corporation
Citation Information: cited by 12 patents; cites 122 patents
Abstract
Disclosed are methods and apparatus for automatic optoelectronic detection and inspection of objects, based on capturing digital images of a two-dimensional field of view in which an object to be detected or inspected may be located, analyzing the images, and making and reporting decisions on the status of the object. Decisions are based on evidence obtained from a plurality of images for which the object is located in the field of view, generally corresponding to a plurality of viewing perspectives. Evidence that an object is located in the field of view is used for detection, and evidence that the object satisfies appropriate inspection criteria is used for inspection. Methods and apparatus are disclosed for capturing and analyzing images at high speed so that multiple viewing perspectives can be obtained for objects in continuous motion.
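The abstract's core idea — keeping only the frames that carry sufficient evidence that the object is in the field of view — can be sketched as follows. This is a minimal illustration under my own assumptions; the scoring function `detection_weight` is hypothetical and stands in for the patent's first-analysis step:

```python
def select_active_frames(frames, detection_weight, threshold=0.5):
    """Keep only the frames with sufficient evidence that the object is
    in the field of view; these become the 'active frames' from which
    location evidence is accumulated across viewing perspectives."""
    return [f for f in frames if detection_weight(f) >= threshold]

# Toy usage: each frame carries a precomputed evidence score in this sketch.
frames = [{"id": 0, "score": 0.1}, {"id": 1, "score": 0.9}, {"id": 2, "score": 0.7}]
active = select_active_frames(frames, lambda f: f["score"])
print([f["id"] for f in active])  # -> [1, 2]
```

In the patent's setting the weight would come from image analysis of each captured frame, not from a stored score; the threshold models the "sufficient evidence" test.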
Representative Claims
1. A method for detecting an object, and determining a mark count of the object, comprising:
using a conveyor having motion relative to a two-dimensional field of view to transport the object;
inputting an encoding signal responsive to the motion of the conveyor, from which can be obtained at desired times a corresponding encoder count indicating a relative location of the conveyor;
capturing a plurality of frames, each frame of the plurality of frames comprising an image of the two-dimensional field of view;
choosing, responsive to a first analysis of the plurality of frames, a plurality of active frames from the plurality of frames, such that the first analysis indicates sufficient evidence that the object is located in the field of view for each frame of the plurality of active frames;
obtaining a plurality of capture counts corresponding to the plurality of active frames, each capture count of the plurality of capture counts being responsive to the encoder count corresponding to a time at which the corresponding active frame was captured;
computing, responsive to a second analysis of the plurality of active frames, a plurality of location values, each location value of the plurality of location values responsive to a position of the object in the field of view in an associated active frame as computed by the second analysis;
determining the mark count of the object using the plurality of location values and the plurality of capture counts, wherein the mark count indicates the encoder count corresponding to a time when the object was located at a fixed mark point; and
indicating a location of the object by producing a signal at a report time that occurs when the corresponding encoder count differs from the mark count by a delay count.

2. The method of claim 1 wherein determining the mark count comprises interpolating a pair of capture counts and a corresponding pair of location values.

3.
The method of claim 1 wherein determining the mark count comprises extrapolating from the capture counts and the location values.

4. The method of claim 1 wherein determining the mark count comprises fitting a line to at least three capture counts and corresponding location values.

5. The method of claim 1, further comprising using a first-in first-out buffer to hold information needed for producing the signal at the report time.

6. The method of claim 1, wherein the location of the object, at the report time, is a downstream position that is separated from a mark point by a distance determined by the delay count.

7. The method of claim 6, further comprising adjusting the delay count so that the downstream position corresponds to a desired location of the object.

8. The method of claim 7, wherein the desired location corresponds to an actuator location.

9. The method of claim 8, further comprising judging, responsive to a third analysis of the plurality of active frames, whether the object satisfies an inspection criterion; and wherein producing the signal further comprises producing the signal only if the third analysis judges that the object does not satisfy the inspection criterion; and the actuator is a reject actuator; and further comprising using the signal to control the reject actuator.

10. The method of claim 7, wherein adjusting the delay count is responsive to a human-machine interface.

11. The method of any of claims 1 and 2-10 further comprising using the signal to indicate that the object was detected.

12. The method of any of claims 1 or 2-10, wherein the plurality of location values are computed responsive to at least one locator, the locator having a search range and oriented so that the search range is substantially parallel to a direction of motion of the object.

13.
A system for detecting an object, and determining a mark count of the object, comprising:
a conveyor, having motion relative to a two-dimensional field of view, that transports the object;
an input device that receives an encoding signal responsive to the motion of the conveyor, from which can be obtained at desired times a corresponding encoder count indicating a relative location of the conveyor;
an image capture device that captures a plurality of frames, each frame of the plurality of frames comprising an image of the two-dimensional field of view; and
an analyzer that chooses, responsive to a first analysis of the plurality of frames, a plurality of active frames from the plurality of frames, such that the first analysis indicates sufficient evidence that the object is located in the field of view for each frame of the plurality of active frames; obtains a plurality of capture counts corresponding to the plurality of active frames, each capture count of the plurality of capture counts being responsive to the encoder count corresponding to a time at which the corresponding active frame was captured; computes, responsive to a second analysis of the plurality of active frames, a plurality of location values, each location value of the plurality of location values responsive to a position of the object in the field of view in an associated active frame as computed by the second analysis; determines the mark count of the object using the plurality of location values and the plurality of capture counts, wherein the mark count indicates the encoder count corresponding to a time when the object was located at a fixed mark point; and
an output signaler that indicates a location of the object by producing a signal at a report time that occurs when the corresponding encoder count differs from the mark count by a delay count.

14.
The system of claim 13 wherein the analyzer determines the mark count by interpolating a pair of capture counts and a corresponding pair of location values.

15. The system of claim 13 wherein the analyzer determines the mark count by extrapolating from the capture counts and the location values.

16. The system of claim 13 wherein the analyzer determines the mark count by fitting a line to at least three capture counts and corresponding location values.

17. The system of claim 13, further comprising a first-in first-out buffer that holds information needed by the output signaler for producing the signal at the report time.

18. The system of claim 13, wherein the location of the object, at the report time, is a downstream position that is separated from the mark point by a distance determined by the delay count.

19. The system of claim 18, further comprising a controller that adjusts the delay count so that the downstream position corresponds to a desired location of the object.

20. The system of claim 19, further comprising an actuator having a location from which to act on the object; and wherein the desired location corresponds to the location of the actuator.

21. The system of claim 20, wherein the analyzer judges, responsive to a third analysis of the plurality of active frames, whether the object satisfies an inspection criterion; the output signaler produces the signal only if the analyzer judges that the object does not satisfy the inspection criterion; the actuator is a reject actuator; and the signal is used to control the reject actuator.

22. The system of claim 19, wherein the controller comprises a human-machine interface.

23. The system of any of claims 13, 14-22, wherein the analyzer computes the plurality of location values responsive to at least one locator, the locator having a search range and oriented so that the search range is substantially parallel to a direction of motion of the object.

24.
A system for computing a mark count of an object transported by a transport medium, the transport medium in relative motion to a two-dimensional field of view, the object comprising a set of visible features, the set of visible features containing at least one visible feature; the system comprising: a data processing device programmed to:
receive an encoder signal responsive to the relative motion of the transport medium, from which is obtained an encoder count indicating a relative location of the transport medium;
receive a plurality of active frames, each frame of the plurality of active frames comprising an image of the two-dimensional field of view;
obtain a plurality of capture counts corresponding to the plurality of active frames, each capture count of the plurality of capture counts being responsive to the encoder count corresponding to a time at which the corresponding active frame was captured;
compute a plurality of location values based on the plurality of active frames, each location value of the plurality of location values responsive to a position of a visible feature of the set of visible features of the object in an associated active frame;
compute the mark count of the object using at least a portion of the plurality of location values and at least a portion of the plurality of capture counts, wherein the mark count indicates the encoder count corresponding to a time when the object was located at a fixed mark point; and
provide a value responsive to the mark count for consumption by an i/o module, wherein the i/o module uses the value responsive to the mark count to selectively signal a location of the object at a report time that occurs when the encoder count differs from the mark count by a delay count.

25. The system of claim 24, wherein the data processing device is further programmed to send the value to a first-in first-out buffer.

26.
The system of claim 24, wherein the location of the object, at the report time, is a downstream position that is separated from the mark point by a distance determined by the delay count.

27. The system of claim 26, further comprising a controller that adjusts the delay count so that the downstream position corresponds to a desired location of the object.

28. The system of claim 27, wherein the downstream position corresponds to a location of an actuator.

29. The system of claim 24, wherein the data processing device is further programmed to: judge, responsive to an analysis of the plurality of active frames, whether the object satisfies an inspection criterion; and signal a reject actuator only when the object does not satisfy the inspection criterion.

30. The system of claim 27, wherein the controller comprises a human-machine interface.

31. The system of claim 24, wherein computing the mark count further comprises fitting a line to at least a portion of the plurality of location values and at least a portion of the plurality of capture counts.

32. The system of claim 24, wherein computing the mark count further comprises interpolating a pair of capture counts and a corresponding pair of location values.

33. The system of claim 24, wherein computing the mark count further comprises extrapolating from the capture counts and the location values.

34.
The system of claim 24, wherein the data processing device is further programmed to receive a succession of frames, each frame of the succession of frames comprising an image of the two-dimensional field of view, such that the plurality of active frames comprises a subset of the succession of frames; and the data processing device is further programmed to: compute a plurality of object detection weights based on and corresponding respectively to the succession of frames, each object detection weight of the plurality of object detection weights comprising evidence that the object is located in the two-dimensional field of view for the corresponding frame; wherein the succession of frames comprises images recorded at least before and after the object is located in the two-dimensional field of view; and select as the plurality of active frames a portion of the succession of frames where the evidence is sufficient.

35. The system of claim 24, wherein the received frames are obtained by an imager with a global shutter.

36. The system of claim 35, wherein the data processing device is further programmed to compute the plurality of location values responsive to at least one locator, the locator having a search range and oriented so that the search range is substantially parallel to a direction of motion of the object.

37.
A computer program product to compute a mark count of an object transported by a transport medium, the transport medium in relative motion with a two-dimensional field of view, the object comprising a set of visible features, the set of visible features containing at least one visible feature, the product tangibly embodied in a non-transitory computer readable medium, the computer program product comprising instructions being operable to cause a data processing apparatus to:
receive an encoder signal responsive to the relative motion of the transport medium, from which is obtained an encoder count indicating a relative location of the transport medium;
receive a plurality of active frames, each frame of the plurality of active frames comprising an image of the two-dimensional field of view;
obtain a plurality of capture counts corresponding to the plurality of active frames, each capture count of the plurality of capture counts being responsive to the encoder count corresponding to a time at which the corresponding active frame was captured;
compute a plurality of location values based on the plurality of active frames, each location value of the plurality of location values responsive to a position of a visible feature of the set of visible features of the object in an associated active frame;
compute the mark count of the object using at least a portion of the plurality of location values and at least a portion of the plurality of capture counts, wherein the mark count indicates the encoder count corresponding to a time when the object was located at a fixed mark point; and
provide a value responsive to the mark count for consumption by an i/o module, wherein the i/o module uses the mark count to selectively signal a location of the object at a report time that occurs when the encoder count differs from the mark count by a delay count.

38. The medium of claim 37, further comprising instructions to send the value to a first-in first-out buffer.

39.
The medium of claim 36, wherein the location of the object, at the report time, is a downstream position that is separated from the mark point by a distance determined by the delay count.

40. The medium of claim 39, further comprising a controller that adjusts the delay count so that the downstream position corresponds to a desired location of the object.

41. The medium of claim 40, wherein the downstream position corresponds to a location of an actuator.

42. The medium of claim 37, further comprising instructions to: judge, responsive to an analysis of the plurality of active frames, whether the object satisfies an inspection criterion; and signal a reject actuator when the object does not satisfy the inspection criterion.

43. The medium of claim 40, wherein the controller comprises a human-machine interface.

44. The medium of claim 37, wherein computing the mark count further comprises fitting a line to at least a portion of the plurality of location values and at least a portion of the plurality of capture counts.

45. The medium of claim 37, wherein computing the mark count further comprises interpolating a pair of capture counts and a corresponding pair of location values.

46. The medium of claim 37, wherein computing the mark count further comprises extrapolating from the capture counts and the location values.

47.
The medium of claim 37, further comprising instructions to receive a succession of frames, each frame of the succession of frames comprising an image of the two-dimensional field of view, such that the plurality of active frames comprises a subset of the succession of frames; and further comprising instructions to: compute a plurality of object detection weights based on and corresponding respectively to the succession of frames, each object detection weight of the plurality of object detection weights comprising evidence that the object is located in the two-dimensional field of view for the corresponding frame; wherein the succession of frames comprises images recorded at least before and after the object is located in the two-dimensional field of view; and select as the plurality of active frames a portion of the succession of frames where the evidence is sufficient.

48. The medium of claim 37, wherein the received frames are obtained by an imager with a global shutter.

49. The medium of claim 37, wherein the plurality of location values are computed responsive to at least one locator, the locator having a search range and oriented so that the search range is substantially parallel to a direction of motion of the object.
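The two computations at the heart of these claims — estimating the mark count from (capture count, location value) pairs, and emitting the report signal once the encoder count has advanced past the mark count by the delay count — can be sketched as below. This is my own illustrative reading, not the patented implementation; it assumes the object's position varies linearly with encoder count between frames, and all function names are mine:

```python
from collections import deque

def mark_count_by_interpolation(capture_counts, location_values, mark_point):
    """Interpolation variant (claims 2, 32, 45): find two frames whose
    measured object positions bracket the fixed mark point, and interpolate
    the encoder count at which the object crossed it."""
    pairs = list(zip(capture_counts, location_values))
    for (c0, x0), (c1, x1) in zip(pairs, pairs[1:]):
        if min(x0, x1) <= mark_point <= max(x0, x1) and x0 != x1:
            t = (mark_point - x0) / (x1 - x0)  # fraction of the way between frames
            return c0 + t * (c1 - c0)
    return None  # the object was never observed crossing the mark point

def report_signals(mark_counts, encoder_counts, delay_count):
    """FIFO output signaler (claims 5, 17): mark counts are queued in
    detection order; a signal is emitted for the oldest queued object once
    the live encoder count exceeds its mark count by the delay count."""
    fifo = deque(mark_counts)
    reports = []
    for count in encoder_counts:
        while fifo and count - fifo[0] >= delay_count:
            reports.append((fifo.popleft(), count))  # (mark count, report time)
    return reports

# Positions 4.0 and 6.0 seen at encoder counts 100 and 120; mark point at 5.0.
print(mark_count_by_interpolation([100, 120], [4.0, 6.0], 5.0))  # -> 110.0
# Two objects marked at counts 110 and 150; actuator sits 25 counts downstream.
print(report_signals([110, 150], range(100, 200), 25))  # -> [(110, 135), (150, 175)]
```

The line-fit variant of claim 4 would replace the pairwise interpolation with a least-squares fit over at least three (capture count, location) pairs, which is more robust to measurement noise in the individual frames.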