Method and system for optoelectronic detection and location of objects
IPC Classification
Country/Type: United States (US) Patent — Granted
International Patent Classification (IPC, 7th edition): H01L-027/00; G06K-009/00
Application number: US-0763752 (2007-06-15)
Registration number: US-8237099 (2012-08-07)
Inventor / Address: Silver, William M.
Applicant / Address: Cognex Corporation
Citation information: Cited by: 2; Cited patents: 84
Abstract
Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.
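A minimal illustrative sketch (not taken from the patent; all names are hypothetical) of the prediction idea the abstract describes: object positions measured in successive one-dimensional images are fit with a line against their capture times, and the fit is extrapolated to the time at which the object will cross a configurable reference point, which is how an output signal can be produced with no latency.

```python
# Hypothetical sketch: fit object position vs. capture time with a
# least-squares line, then solve for the time at which the fitted line
# crosses an electronically set reference point.

def predict_reference_time(capture_times, positions, reference_point):
    """Fit position = v * t + p0 and return the t at which
    position == reference_point. Illustrative only."""
    n = len(capture_times)
    mean_t = sum(capture_times) / n
    mean_p = sum(positions) / n
    num = sum((t - mean_t) * (p - mean_p)
              for t, p in zip(capture_times, positions))
    den = sum((t - mean_t) ** 2 for t in capture_times)
    v = num / den             # estimated speed (position units per time unit)
    p0 = mean_p - v * mean_t  # fitted position at t = 0
    return (reference_point - p0) / v

# Object moving at 2 units/ms, observed at t = 0..3 ms, reference at 20:
t_ref = predict_reference_time([0, 1, 2, 3], [10, 12, 14, 16], 20)  # 5.0
```

Because the crossing time is predicted before the object reaches the reference point, the signal can be scheduled to coincide with the crossing rather than lag behind it.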
Representative Claims
1. An optoelectronic system for providing information describing an object, comprising: an optical sensor that makes light measurements in a field of view; means for performing a motion process that provides relative motion between the object and the field of view in a direction of motion, such that the object passes through the field of view; means for performing a capture process that captures a plurality of one-dimensional images of the field of view, wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; and means for performing a process that analyzes at least a portion of the images so as to provide the information describing the object.
2. The system of claim 1, wherein the information describing the object comprises knowledge responsive to object presence in the field of view.
3. The system of claim 1, wherein the motion process is such that the object crosses a reference point at a reference time; and the information describing the object comprises an estimate of the reference time.
4. The system of claim 3, further comprising an electronic human-machine interface, and wherein the reference point is adjusted in response to the human-machine interface.
5. The system of claim 3, further comprising a signaling process that produces a signal that indicates the estimate of the reference time by occurring at a signal time that is responsive to the estimate of the reference time.
6. The system of claim 5, further comprising an electronic human-machine interface, and wherein the signal time is adjusted in response to the human-machine interface.
7. The system of claim 5, wherein there is no latency between the reference time and the signal time.
8. The system of claim 3, wherein the estimate of the reference time is determined by predicting the reference time.
9. The system of claim 3, further comprising an example object; and a setup signal that indicates that the example object is placed in the field of view at a setup position; and wherein the reference point is set responsive to the example object, the setup signal, and the setup position.
10. The system of claim 9, further comprising a signaling process that produces a signal responsive to the information describing the object, and wherein the signal occurs at a time when the object crosses the setup position.
11. The system of claim 9, further comprising a signaling process that produces a signal responsive to the information describing the object, and wherein the signal occurs at a time when the object is upstream from the setup position.
12. The system of claim 3, further comprising a test object; and a human-machine interface comprising an indicator that indicates a detection state chosen from among: the test object is detected upstream from the reference point; the test object is detected downstream from the reference point; and the test object is not detected.
13. The system as in any one of claims 1-3, wherein the optical sensor comprises a linear optical sensor comprising a one-dimensional array of photoreceptors.
14. An optoelectronic system for providing information describing an object, comprising: an optical sensor that makes light measurements in a field of view; a motion element that provides relative motion between the object and the field of view in a direction of motion, such that the object passes through the field of view; a capture element that captures a plurality of one-dimensional images of the field of view, wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; a measurement element that analyzes at least a portion of the images so as to produce image measurements; a selection element that selects object measurements from among the image measurements, the object measurements comprising image measurements that are judged responsive to the object; and a decision element that analyzes the object measurements so as to produce the information describing the object.
15. The system of claim 14, wherein at least a portion of the image measurements are responsive to object presence in the field of view; the selection element is responsive to at least a portion of the image measurements that are responsive to object presence in the field of view; and the information describing the object comprises knowledge responsive to object presence in the field of view.
16. The system of claim 14, wherein the motion element is such that the object crosses a reference point at a reference time; at least a portion of the image measurements are responsive to object position in the field of view; at least a portion of the object measurements are selected from among the image measurements that are responsive to object position in the field of view; the decision element is responsive to at least a portion of the object measurements that are selected from among the image measurements that are responsive to object position in the field of view; and the information describing the object comprises an estimate of the reference time.
17. The system of claim 16, wherein the capture element produces a plurality of capture times, each capture time being responsive to a time at which a corresponding image was captured; and the decision element is responsive to at least one of the capture times.
18. The system of claim 17, wherein the decision element comprises fitting a curve to points comprising at least a portion of the capture times and corresponding object measurements that are selected from among the image measurements that are responsive to object position in the field of view.
19. The system of claim 16, further comprising an electronic human-machine interface, and wherein the reference point is adjusted in response to the human-machine interface.
20. The system of claim 16, further comprising a signaling element that produces a signal responsive to the information describing the object, and wherein the signal indicates the information describing the object by occurring at a signal time that is responsive to the estimate of the reference time.
21. The system of claim 20, further comprising an electronic human-machine interface, and wherein the signal time is adjusted in response to the human-machine interface.
22. The system of claim 20, wherein there is no latency between the reference time and the signal time.
23. The system of claim 16, wherein the decision element determines the estimate of the reference time by predicting the reference time.
24. The system of claim 16, further comprising an example object; and a setup signal that indicates that the example object is placed in the field of view at a setup position; and wherein the reference point is set responsive to the example object, the setup signal, and the setup position.
25. The system of claim 24, further comprising a signaling element that produces a signal responsive to the information describing the object, and wherein the signal occurs at a time when the object crosses the setup position.
26. The system of claim 24, further comprising a signaling element that produces a signal responsive to the information describing the object, and wherein the signal occurs at a time when the object is upstream from the setup position.
27. The system of claim 16, further comprising a test object; and a human-machine interface comprising an indicator that indicates a detection state chosen from among: the test object is detected upstream from the reference point; the test object is detected downstream from the reference point; and the test object is not detected.
28. The system as in any one of claims 14-16, wherein the measurement element comprises a pattern detection element.
29. The system of claim 28, further comprising an example object; and a training element that obtains a model pattern responsive to the example object; and wherein the pattern detection element is responsive to the model pattern.
30. The system of claim 28, wherein the pattern detection element comprises a normalized correlation element.
31. The system as in any one of claims 14-16, wherein the optical sensor comprises a linear optical sensor comprising a one-dimensional array of photoreceptors.
32. An optoelectronic system for detecting and locating an object, comprising: a linear optical sensor comprising a linear array of photoreceptors that makes light measurements in a field of view; means for performing a motion process that provides relative motion between the object and the field of view in a direction of motion, such that the object passes through the field of view and crosses a reference point at a reference time; means for performing a capture process that captures a plurality of one-dimensional images of the field of view, and produces a plurality of capture times, each capture time being responsive to a time at which a corresponding image was captured, and wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; means for performing a measurement process that analyzes at least a portion of the images so as to produce position measurements and score measurements; means for performing a decision process that uses the capture times, the position measurements, and the score measurements to produce an estimate of the reference time; and means for performing a signaling process that produces a signal that indicates the reference time by occurring at a signal time that is responsive to the estimate of the reference time.
33. The system of claim 32, wherein the decision process estimates the reference time by predicting the reference time.
34. The system of claim 32, wherein there is no latency between the reference time and the signal time.
35. An optoelectronic method for providing information describing an object, comprising: using an optical sensor to make light measurements in a field of view; providing relative motion between the object and the optical sensor in a direction of motion, such that the object passes through the field of view; capturing a plurality of one-dimensional images of the field of view, wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; and analyzing at least a portion of the images so as to provide the information describing the object.
36. The method of claim 35, wherein the information describing the object comprises knowledge responsive to object presence in the field of view.
37. The method of claim 35, wherein the relative motion is such that the object crosses a reference point at a reference time; and the information describing the object comprises an estimate of the reference time.
38. The method of claim 37, further comprising using an electronic human-machine interface to adjust the reference point.
39. The method of claim 37, further comprising producing a signal that indicates the estimate of the reference time by occurring at a signal time that is responsive to the estimate of the reference time.
40. The method of claim 39, further comprising using an electronic human-machine interface to adjust the signal time.
41. The method of claim 39, wherein there is no latency between the reference time and the signal time.
42. The method of claim 37, wherein the analyzing step comprises predicting the reference time.
43. The method of claim 37, further comprising providing an example object; providing a setup signal that indicates that the example object is placed in the field of view at a setup position; and setting the reference point in response to the example object, the setup signal, and the setup position.
44. The method of claim 43, further comprising producing a signal responsive to the information describing the object, wherein the signal occurs at a time when the object crosses the setup position.
45. The method of claim 43, further comprising producing a signal responsive to the information describing the object, and wherein the signal occurs at a time when the object is upstream from the setup position.
46. The method of claim 37, further comprising providing a test object; and using a human-machine interface to indicate a detection state chosen from among: the test object is detected upstream from the reference point; the test object is detected downstream from the reference point; and the test object is not detected.
47. The method as in any one of claims 35-37, wherein the optical sensor comprises a linear optical sensor comprising a one-dimensional array of photoreceptors.
48. An optoelectronic method for providing information describing an object, comprising: using an optical sensor to make light measurements in a field of view; providing relative motion between the object and the optical sensor in a direction of motion, such that the object passes through the field of view; capturing a plurality of one-dimensional images of the field of view, wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; analyzing at least a portion of the images so as to produce image measurements; selecting object measurements from among the image measurements, the object measurements comprising image measurements that are judged responsive to the object; and analyzing the object measurements so as to produce the information describing the object.
49. The method of claim 48, wherein at least a portion of the image measurements are responsive to object presence in the field of view; the selecting step is responsive to at least a portion of the image measurements that are responsive to object presence in the field of view; and the information describing the object comprises knowledge responsive to object presence in the field of view.
50. The method of claim 48, wherein the relative motion is such that the object crosses a reference point at a reference time; at least a portion of the image measurements are responsive to object position in the field of view; at least a portion of the object measurements are selected from among the image measurements that are responsive to object position in the field of view; the step of analyzing the object measurements is responsive to at least a portion of the object measurements that are selected from among the image measurements that are responsive to object position in the field of view; and the information describing the object comprises an estimate of the reference time.
51. The method of claim 50, further comprising producing a plurality of capture times, each capture time being responsive to a time at which a corresponding image was captured; and wherein the step of analyzing the object measurements is further responsive to at least one of the capture times.
52. The method of claim 51, wherein the step of analyzing the object measurements comprises fitting a curve to points comprising at least a portion of the capture times and corresponding object measurements that are selected from among the image measurements that are responsive to object position in the field of view.
53. The method of claim 50, further comprising using an electronic human-machine interface to adjust the reference point.
54. The method of claim 50, further comprising producing a signal that indicates the estimate of the reference time by occurring at a signal time that is responsive to the estimate of the reference time.
55. The method of claim 54, further comprising using an electronic human-machine interface to adjust the signal time relative to the estimate of the reference time.
56. The method of claim 54, wherein there is no latency between the reference time and the signal time.
57. The method of claim 50, wherein the estimate of the reference time is determined by predicting the reference time.
58. The method of claim 50, further comprising providing an example object; providing a setup signal that indicates that the example object is placed in the field of view at a setup position; and setting the reference point in response to the example object, the setup signal, and the setup position.
59. The method of claim 58, further comprising producing a signal responsive to the estimate of the reference time, wherein the signal occurs at a time when the object crosses the setup position.
60. The method of claim 58, further comprising producing a signal responsive to the estimate of the reference time, and wherein the signal occurs at a time when the object is upstream from the setup position.
61. The method of claim 50, further comprising providing a test object; and using a human-machine interface to indicate a detection state chosen from among: the test object is detected upstream from the reference point; the test object is detected downstream from the reference point; and the test object is not detected.
62. The method as in any one of claims 48-50, wherein the step of analyzing at least a portion of the plurality of images comprises performing pattern detection.
63. The method of claim 62, further comprising providing an example object; and obtaining a model pattern responsive to the example object; and wherein the pattern detection is responsive to the model pattern.
64. The method of claim 62, wherein the pattern detection step comprises performing normalized correlation.
65. The method as in any one of claims 48-50, wherein the optical sensor comprises a linear optical sensor comprising a one-dimensional array of photoreceptors.
66. An optoelectronic method for detecting and locating an object, comprising: using a linear optical sensor to make light measurements in a field of view; providing relative motion between the object and the optical sensor in a direction of motion, such that the object passes through the field of view and crosses a reference point at a reference time; capturing a plurality of one-dimensional images of the field of view, wherein each of the plurality of images is responsive to the respective light measurements; the images are oriented approximately parallel to the direction of motion; and at least a portion of the images correspond to a plurality of positions of the object relative to the field of view; producing a plurality of capture times, each capture time being responsive to a time at which a corresponding image was captured; analyzing at least a portion of the plurality of images so as to produce position measurements and score measurements; using the capture times, the position measurements, and the score measurements to produce an estimate of the reference time; and producing a signal that indicates the reference time by occurring at a signal time that is responsive to the estimate of the reference time.
67. The method of claim 66, wherein the step of using the capture times, the position measurements, and the score measurements to produce an estimate of the reference time comprises predicting the reference time.
68. The method of claim 66, wherein there is no latency between the reference time and the signal time.
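The claims name normalized correlation as one pattern-detection element for producing position and score measurements from a one-dimensional image. The following is an illustrative sketch of that general technique (not the patent's implementation): a 1-D model pattern is slid across a 1-D image, and the best-matching position and its normalized correlation score are returned.

```python
import math

def normalized_correlation(image, model):
    """Slide a 1-D model over a 1-D image; return (best_position, best_score),
    where the score is the normalized correlation coefficient in [-1, 1].
    Purely illustrative of the technique named in the claims."""
    m = len(model)
    mean_m = sum(model) / m
    dm = [x - mean_m for x in model]
    norm_m = math.sqrt(sum(d * d for d in dm))
    best_pos, best_score = -1, -2.0
    for pos in range(len(image) - m + 1):
        window = image[pos:pos + m]
        mean_w = sum(window) / m
        dw = [x - mean_w for x in window]
        norm_w = math.sqrt(sum(d * d for d in dw))
        if norm_m == 0 or norm_w == 0:
            continue  # flat signal: correlation is undefined here
        score = sum(a * b for a, b in zip(dm, dw)) / (norm_m * norm_w)
        if score > best_score:
            best_pos, best_score = pos, score
    return best_pos, best_score
```

Because the score is normalized by the mean and energy of each window, it discriminates on shape rather than absolute brightness, which is one reason normalized correlation is a common choice for robust pattern detection.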
Patents cited by this patent (84)
Hansen, Michael Wade; Burt, Peter Jeffrey, Apparatus and a method for detecting motion within an image sequence.
Suzuki Masato (Ibaraki JPX) Inaba Hiromi (Katsuta JPX) Nakamura Kiyoshi (Katsuta JPX) Nakata Naofumi (Katsuta JPX) Yamani Hiroaki (Katsuta JPX) Oonuma Naoto (Hitachi JPX), Apparatus and methods for detecting number of people waiting in an elevator hall using plural image processing means wit.
Tsujino, Hiroshi; Kondo, Hiroshi; Miura, Atsushi; Nagai, Shinichi; Akatsuka, Koji, Apparatus, program and method for detecting both stationary objects and moving objects in an image using optical flow.
Eleftheriadis Alexandros ; Jacquin Arnaud Eric, Automatic face and facial feature location detection for low bit rate model-assisted H.261 compatible coding of video.
Corwin Thomas L. (McLean VA) Richardson Henry R. (Alexandria VA) Kuo Stanley D. (Arlington VA) Stefanick Tom A. (Arlington VA) Keeler R. Norris (McLean VA) Pflibsen Kent (Tucson AZ) Calmes Lonnie K., Automatic target detection process.
Pfeiffer Carl G. ; Tsai Cheng-Chih ; Gumas D. Spyro ; Calingaert Christopher ; Nguyen Danny D., Background adaptive target detection and tracking with multiple observation and processing stages.
Vezzalini Alessandro,ITX ; Landolfi Marco,ITX ; Brandstetter Reiner,DEX, Electro-optical device for detecting the presence of a body at an adjustable distance, with background suppression.
Baharav, Izhak; Blalock, Travis N.; Machida, Akihiro; Smith, George E.; Ang, Jin Kiong, Imaging system and apparatus for combining finger recognition and finger navigation.
Douglas James Beck ; Clarence Keith Griggs ; Jeffrey Erickson Roeca ; Jeffrey John Haeffele ; Mason Bradfield Samuels, Integrated trigger function display system and methodology for trigger definition development in a signal measurement system having a graphical user interface.
Scola Joseph R. ; Ruzhitsky Vladimir N. ; Jacobson Lowell D., Machine vision system for object feature analysis and validation based on multiple object images.
Silver William M. (Medfield MA) Druker Samuel (Brookline MA) Romanik Philip (West Haven CT) Arbogast Carroll (Needham MA), Method and apparatus for interactively generating a computer program for machine vision analysis of an object.
White Stanley A.; Walley Kenneth S.; Johnston James W.; Henderson P. Michael; Hale Kelly H.; Andrews, Jr., Warner B.; Siann Jonathan I., Method and apparatus for sensing an audio signal that is sensitive to the audio signal and insensitive to background noise.
Schneider Volker Rainer,DEX ; Braach Hans-Joachim,DEX, Method and device for the automatic detection of surface defects for continuously cast products with continuous mechanical removal of the material.
Michael David J. ; Wallack Aaron, Nonfeedback-based machine vision method for determining a calibration relationship between a camera and a moveable obje.
Glier, Michael T.; Laird, Mark D.; Tinnemeier, Michael T.; Small, Steven I.; Sybel, Randall T., Traffic light violation prediction and recording system.
Eskridge, Thomas C.; Newberry, Jeff E.; DeYong, Mark R.; Dunn, Scott A.; Huffstutter, Wesley K.; Grace, John W.; Lumeyer, Marc A.; Ellison, Michael A.; Zoch, John R., User interface for automated optical inspection systems.
Ekchian Leon K. (Northridge CA) Johnson David D. (Simi Valley CA) Smith William F. (Los Angeles CA), Vector neural network for low signal-to-noise ratio detection of a target.
Michalopoulos Panos G. (St. Paul MN) Fundakowski Richard A. (St. Paul MN) Geokezas Meletios (White Bear Lake MN) Fitch Robert C. (Roseville MN), Vehicle detection through image processing for traffic surveillance and control.