System and method for assuring high resolution imaging of distinctive characteristics of a moving object
Country/Type
United States (US) Patent (Granted)
International Patent Classification (IPC, 7th ed.)
G06K-009/00
H04N-009/47
H04N-009/44
H04N-007/18
Application number
UP-0836075 (2004-04-30)
Registration number
US-7542588 (2009-07-01)
Inventors / Address
Ekin, Ahmet
Hampapur, Arun
Pankanti, Sharathchandra U.
Applicant / Address
International Business Machines Corporation
Agent / Address
Yea & Associates, P.C.
Citation information
Cited by: 47 patents / Cites: 7 patents
Abstract
A system and method for assuring a high resolution image of an object, such as the face of a person, passing through a targeted space are provided. Both stationary and active or pan-tilt-zoom cameras are utilized. The at least one stationary camera acts as a trigger point such that when a person pas
A system and method for assuring a high resolution image of an object, such as the face of a person, passing through a targeted space are provided. Both stationary and active or pan-tilt-zoom cameras are utilized. The at least one stationary camera acts as a trigger point such that when a person passes through a predefined targeted area of the at least one stationary camera, the system is triggered for object imaging and tracking. Upon the occurrence of a triggering event in the system, the system predicts the motion and position of the person. Based on this predicted position of the person, an active camera that is capable of obtaining an image of the predicted position is selected and may be controlled to focus its image capture area on the predicted position of the person. After the active camera control and image capture processes, the system evaluates the quality of the captured face images and reports the result to the security agents and interacts with the user.
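The prediction step described in the abstract (and detailed in claims 1 and 6) amounts to estimating the object's speed and direction from the stationary camera's frames, then extrapolating to a future time that exceeds the active camera's movement time, so the camera is already at rest when the object arrives. The following is an illustrative sketch, not part of the patent; the constant-velocity model and the 1.5x lead margin are assumptions chosen for the example.

```python
import numpy as np

def estimate_motion(positions, timestamps):
    """Fit a constant-velocity motion model to tracked positions
    (in practice, frame differencing on the stationary camera's
    first set of images would yield these positions)."""
    dt = timestamps[-1] - timestamps[0]
    velocity = (positions[-1] - positions[0]) / dt
    return positions[-1], velocity, timestamps[-1]

def predict_future_position(last_pos, velocity, last_t, camera_move_time):
    """Pick a future time strictly greater than the active camera's
    movement (slew) time, so the camera can be oriented and at rest
    before the object reaches the predicted position."""
    lead = camera_move_time * 1.5          # assumed margin beyond the slew time
    future_t = last_t + lead
    return last_pos + velocity * lead, future_t

# Hypothetical track of a person crossing the stationary camera's targeted area
positions = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])  # metres
timestamps = np.array([0.0, 0.5, 1.0])                      # seconds

last_pos, vel, last_t = estimate_motion(positions, timestamps)
target, t = predict_future_position(last_pos, vel, last_t, camera_move_time=0.4)
# `target` is where the active camera is steered; `t` is when the object arrives
```

The key property, per claim 1, is that the chosen future time always exceeds the camera's movement time, which is what lets the camera settle before capture and reduces motion blur.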
Representative claims
What is claimed is: 1. A method, in a data processing system, for obtaining an image of an object of interest, comprising: predefining a targeted area; pointing a stationary camera at said targeted area, said stationary camera remaining pointed at said targeted area during operation of said data pr
What is claimed is:

1. A method, in a data processing system, for obtaining an image of an object of interest, comprising:
predefining a targeted area;
pointing a stationary camera at said targeted area, said stationary camera remaining pointed at said targeted area during operation of said data processing system;
generating a trigger in response to an object moving through said targeted area, wherein movement through said target area generates said trigger;
in response to said trigger:
using said stationary camera to determine a motion of said object through said targeted area, said motion determined using a first set of video images of the object that were captured by the stationary camera as the object moved through said targeted area;
using said motion determined using said first set of images to determine motion parameters;
using said motion parameters to predict a future position of said object at a future time that is greater than a movement time of an active camera, wherein said object will arrive at said future position at or after, but not before, said future time, and further wherein said movement time is the time it takes said active camera to move to a desired orientation;
controlling said active camera to capture a second set of images of the object at the future position of the object;
evaluating a quality measure of each image in the second set of images;
storing an image from the second set of images if a quality measure of the image meets predetermined criteria; and
wherein controlling said active camera to capture a second set of images of the object includes transmitting control signals to the active camera causing the active camera to be oriented to the future position prior to arrival of the object at the future position, the active camera being at rest when the capturing of the second set of images is initiated, wherein blurring of said second set of images due to movement of said active camera is reduced.

2.
The method of claim 1, further comprising:
providing the image from the second set of images to an external system for comparison to image data stored in the external system;
determining if there is matching image data in the external system that matches the image from the second set of images; and
correlating information associated with a matching image in the image data stored in the external system with the image from the second set of images.

3. The method of claim 2, further comprising: generating an alert based on correlating the information associated with the matching image in the image data stored in the external system with the image from the second set of images.

4. The method of claim 1, wherein evaluating a quality measure of each image in the second set of images includes performing a blur analysis of the pixels of the images in the second set of images.

5. The method of claim 4, wherein performing a blur analysis of the pixels of the images in the second set of images includes:
obtaining values for the pixels in a first image from the second set of images;
predicting values for pixels in a second image from the second set of images; and
comparing the predicted values for the pixels in the second image to actual values for the pixels in the second image to determine if the first image meets predetermined quality requirements.

6. The method of claim 1, wherein using said motion parameters to predict a future position of said object includes determining said motion of the object based on differences between frames of images in the first set of images to identify a speed and direction of motion of the object.

7. The method of claim 1, further comprising: sending a message to an operator workstation informing the operator that corrective action is necessary if none of the images in the second set of images has a quality measure that meets the predetermined criteria.

8.
A computer program product that is stored in a computer readable medium in a data processing system for obtaining an image of an object of interest, comprising:
instructions for predefining a targeted area;
instructions for pointing a stationary camera at said targeted area, said stationary camera remaining pointed at said targeted area during operation of said data processing system;
instructions for generating a trigger in response to an object moving through said targeted area, wherein movement through said target area generates said trigger;
in response to said trigger:
instructions for using said stationary camera to determine a motion of said object through said targeted area, said motion determined using a first set of video images of the object that were captured by the stationary camera as the object moved through said targeted area;
instructions for using said motion determined using said first set of images to determine motion parameters;
instructions for using said motion parameters to predict a future position of said object at a future time that is greater than a movement time of an active camera, wherein said object will arrive at said future position at or after, but not before, said future time, and further wherein said movement time is the time it takes said active camera to move to a desired orientation;
instructions for controlling said active camera to capture a second set of images of the object at the future position of the object;
instructions for evaluating a quality measure of each image in the second set of images;
instructions for storing an image from the second set of images if a quality measure of the image meets predetermined criteria; and
wherein controlling said active camera to capture a second set of images of the object includes transmitting control signals to the active camera causing the active camera to be oriented to the future position prior to arrival of the object at the future position, the active camera being at rest when the capturing of the second set of images is initiated, wherein blurring of said second set of images due to movement of said active camera is reduced.

9. The computer program product of claim 8, wherein the instructions for evaluating a quality measure of each image in the second set of images include instructions for performing a blur analysis of the pixels of the images in the second set of images.

10. The computer program product of claim 9, wherein the instructions for performing a blur analysis of the pixels of the images in the second set of images include:
instructions for obtaining values for the pixels in a first image from the second set of images;
instructions for predicting values for pixels in a second image from the second set of images; and
instructions for comparing the predicted values for the pixels in the second image to actual values for the pixels in the second image to determine if the first image meets predetermined quality requirements.

11. The computer program product of claim 8, wherein the instructions for using said motion parameters to predict a future position of said object include instructions for determining said motion of the object based on differences between frames of images in the first set of images to identify a speed and direction of motion of the object.

12. The computer program product of claim 8, further comprising: instructions for sending a message to an operator workstation informing the operator that corrective action is necessary if none of the images in the second set of images has a quality measure that meets the predetermined criteria.

13.
A data processing system for obtaining an image of an object of interest, comprising:
a predefined targeted area;
a stationary camera that is pointed at said targeted area, said stationary camera remaining pointed at said targeted area during operation of said data processing system;
means for generating a trigger in response to an object moving through said targeted area, wherein movement through said target area generates said trigger;
in response to said trigger:
means for using said stationary camera to determine a motion of said object through said targeted area, said motion determined using a first set of video images of the object that were captured by the stationary camera as the object moved through said targeted area;
means for using said motion determined using said first set of images to determine motion parameters;
means for using said motion parameters to predict a future position of said object at a future time that is greater than a movement time of an active camera, wherein said object will arrive at said future position at or after, but not before, said future time, and further wherein said movement time is the time it takes said active camera to move to a desired orientation;
means for controlling said active camera to capture a second set of images of the object at the future position of the object;
means for evaluating a quality measure of each image in the second set of images;
means for storing an image from the second set of images if a quality measure of the image meets predetermined criteria; and
wherein controlling said active camera to capture a second set of images of the object includes transmitting control signals to the active camera causing the active camera to be oriented to the future position prior to arrival of the object at the future position, the active camera being at rest when the capturing of the second set of images is initiated, wherein blurring of said second set of images due to movement of said active camera is reduced.

14. The system of claim 13, further comprising:
means for providing the image from the second set of images to an external system for comparison to image data stored in the external system;
means for determining if there is matching image data in the external system that matches the image from the second set of images; and
means for correlating information associated with a matching image in the image data stored in the external system with the image from the second set of images.

15. The system of claim 14, further comprising: means for generating an alert based on correlating the information associated with the matching image in the image data stored in the external system with the image from the second set of images.

16. The system of claim 13, wherein the means for evaluating a quality measure of each image in the second set of images includes means for performing a blur analysis of the pixels of the images in the second set of images.

17. The system of claim 16, wherein the means for performing a blur analysis of the pixels of the images in the second set of images includes:
means for obtaining values for the pixels in a first image from the second set of images;
means for predicting values for pixels in a second image from the second set of images; and
means for comparing the predicted values for the pixels in the second image to actual values for the pixels in the second image to determine if the first image meets predetermined quality requirements.

18. The system of claim 13, wherein the means for using said motion parameters to predict a future position of said object includes means for determining said motion of the object based on differences between frames of images in the first set of images to identify a speed and direction of motion of the object.

19. The system of claim 13, further comprising: means for sending a message to an operator workstation informing the operator that corrective action is necessary if none of the images in the second set of images has a quality measure that meets the predetermined criteria.
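Claims 5, 10, and 17 describe the blur analysis as predicting pixel values for one image in the second set and comparing them against the actual values. The sketch below is not the patented implementation; it assumes the simplest possible predictor, the previous frame, which is plausible only because the claims require the active camera to be at rest during capture, and it uses a hypothetical mean-squared-error threshold as the "predetermined quality requirements".

```python
import numpy as np

def passes_blur_check(first, second, max_mse=25.0):
    """Predict the second image's pixel values from the first image
    (trivial previous-frame predictor; valid only for a camera at rest),
    then compare predicted vs. actual values. A large residual flags
    blur or unexpected motion; `max_mse` is an assumed threshold."""
    predicted = first.astype(np.float64)
    actual = second.astype(np.float64)
    mse = np.mean((predicted - actual) ** 2)
    return mse <= max_mse

# Synthetic frames standing in for the active camera's second set of images
rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, (64, 64)).astype(np.uint8)
steady = sharp.copy()                 # camera at rest, stable scene
shifted = np.roll(sharp, 8, axis=1)   # residual camera/object motion

print(passes_blur_check(sharp, steady))    # True
print(passes_blur_check(sharp, shifted))   # False: large inter-frame residual
```

Per claim 7 (and its counterparts 12 and 19), a frame pair failing this check for every image in the second set would trigger a message to the operator workstation that corrective action is necessary.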
Patents cited by this patent (7)
Philip M. Anderson, III ; Hector Irizarry, License plate surveillance system.
Lyons, Damian M.; Cohen-Solal, Eric; Gutta, Srinivas; Colmenarez, Jr., Antonio, Method and apparatus to select the best video frame to transmit to a remote station for CCTV based residential security monitoring.
Bae, Ju-Han; Park, Sang-Ji; Chon, Je-Youl, Characterizing point checking region setting apparatus and method, and image stabilizing apparatus including the same.
Desimone, Michael J.; Hampapur, Arun; Lu, Zuoxuan; Mercier, Carl P.; Milite, Christopher S.; Russo, Stephen R.; Shu, Chiao-Fe; Tan, Chek K., Coding scheme for identifying spatial locations of events within video image data.
Desimone, Michael J.; Hampapur, Arun; Lu, Zuoxuan; Milite, Christopher S.; Russo, Stephen R.; Shu, Chiao-Fe; Tan, Chek K., Coding scheme for identifying spatial locations of events within video image data.
Desimone, Michael J.; Hampapur, Arun; Lu, Zuoxuan; Mercier, Carl P.; Milite, Christopher S.; Russo, Stephen R.; Shu, Chiao-Fe; Tan, Chek K., Identifying spatial locations of events within video image data.
Wilbert, Anthony Russell; Wach, Hans Brandon; Chung, David Ching-Chien, Method and apparatus for receiving a broadcast radio service offer from an image.
Wilbert, Anthony Russell; Wach, Hans Brandon; Chung, David Ching-Chien, Method and apparatus for receiving a location of a vehicle service center from an image.
Wilbert, Anthony Russell; Wach, Hans Brandon; Chung, David Ching-Chien, Method and apparatus for receiving vehicle information from an image and posting the vehicle information to a website.
Wilbert, Anthony Russell; Wach, Hans Brandon; Chung, David Ching-Chien, Method and apparatus for recovering a vehicle identification number from an image.
Knutson, Christopher; Akcakir, Osman; Fu, Haojun, Methods and apparatuses for detection of positional freedom of particles in biological and chemical analyses and applications in immunodiagnostics.
An, Myung-seok, Surveillance camera system for controlling cameras using position and orientation of the cameras and position information of a detected object.
Wilbert, Anthony Russell; Chung, David Ching-Chien; Wach, Hans Brandon; Rauker, Goran Matko; White, Solomon John, System and method for electronic processing of vehicle transactions based on image detection of vehicle license plate.
Nizko, Henry J.; Woodington, W. Gordon; Manoogian, David V.; Russell, Mark E.; Toolin, Maurice J.; Walzer, Jonathan H., System and method for occupancy detection.