Wide-area site-based video surveillance system
IPC Classification
Country / Type: United States (US) patent, granted
International Patent Classification (IPC, 7th edition): G06K-009/00; H04N-005/225
Application number: UP-0098579 (2005-04-05)
Registration number: US-7583815 (2009-09-16)
Inventors / Address:
Zhang, Zhong
Yu, Li
Liu, Haiying
Brewer, Paul C.
Chosak, Andrew J.
Gupta, Himaanshu
Haering, Niels
Javed, Omar
Lipton, Alan J.
Rasheed, Zeeshan
Venetianer, Péter L.
Yin, Weihong
Yu, Liangyin
Applicant / Address: ObjectVideo Inc.
Agent / Address: Venable LLP
Citation information: times cited: 38; cited patents: 4
Abstract
A computer-readable medium contains software that, when read by a computer, causes the computer to perform a method for wide-area site-based surveillance. The method includes receiving surveillance data, including view targets, from a plurality of sensors at a site; synchronizing the surveillance data to a single time source; maintaining a site model of the site, wherein the site model comprises a site map, a human size map, and a sensor network model; analyzing the synchronized data using the site model to determine if the view targets represent a same physical object in the site; creating a map target corresponding to a physical object in the site, wherein the map target includes at least one view target; receiving a user-defined global event of interest, wherein the user-defined global event of interest is based on the site map and based on a set of rules; detecting the user-defined global event of interest in real time based on a behavior of the map target; and responding to the detected event of interest according to a user-defined response to the user-defined global event of interest.
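The synchronization step in the abstract (detailed in claim 7) compares each sensor reading's time stamp to a single time source, discards readings that deviate by more than a system-allowed latency, and orders the remainder chronologically. A minimal sketch of that step, with all names, the `Reading` structure, and the latency value being illustrative assumptions rather than anything specified by the patent:

```python
from dataclasses import dataclass, field

# Assumed system-allowed latency, in seconds (the patent leaves this
# value to the system configuration).
MAX_LATENCY = 0.5

@dataclass(order=True)
class Reading:
    """One timestamped surveillance datum; ordering compares timestamps only."""
    timestamp: float
    sensor_id: str = field(compare=False)
    payload: dict = field(compare=False, default_factory=dict)

def synchronize(readings, reference_time, max_latency=MAX_LATENCY):
    """Discard readings whose time stamps differ from the single time
    source by more than the allowed latency, then sort the rest
    chronologically."""
    kept = [r for r in readings
            if abs(r.timestamp - reference_time) <= max_latency]
    return sorted(kept)  # chronological, via Reading's timestamp ordering
```

For example, with a reference time of 10.0 s, `synchronize([Reading(10.2, "cam1"), Reading(9.1, "cam2"), Reading(10.0, "cam3")], 10.0)` drops cam2's reading (0.9 s off the reference, beyond the 0.5 s allowance) and returns cam3's before cam1's.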
Representative claims
What is claimed is:

1. A computer-readable medium containing software that, when read by a computer, causes the computer to perform a method for wide-area site-based surveillance, the method comprising: receiving surveillance data, including view targets, from a plurality of sensors at a site; synchronizing said surveillance data to a single time source; maintaining a site model of the site, wherein said site model comprises a site map, a human size map, and a sensor network model; analyzing said synchronized data using said site model to determine if said view targets represent a same physical object in the site; creating a map target corresponding to a physical object in the site, wherein said map target includes at least one view target; receiving a user-defined global event of interest, wherein said user-defined global event of interest is based on said site map and based on a set of rules; detecting said user-defined global event of interest in real time based on a behavior of said map target; and responding to said detected event of interest according to a user-defined response to said user-defined global event of interest, wherein said analyzing further comprises: updating existing view targets with new size, location, and appearance information; determining if a new view target corresponds to an existing map target by comparing location information; and comparing appearances, wherein each view target includes an appearance model that includes a distributed intensity histogram, and wherein comparing appearances comprises: determining an average correlation between said distributed intensity histograms for each of said view targets and said map targets; merging said new view target into said existing map target, if said new view target corresponds to said existing map target, and updating said existing map target with said new view target; producing a new map target corresponding to said new view target, if said new view target does not correspond to said existing map target; and determining if two map targets correspond to the same physical object.

2. The computer-readable medium of claim 1, wherein said maintaining a site model comprises: calibrating a sensor to said site map; providing a site map location of each view target; providing an actual size and a velocity of a target; and providing an object traffic model of the site.

3. The computer-readable medium of claim 1, wherein the method further comprises receiving surveillance data from a fusion sensor.

4. The computer-readable medium of claim 1, wherein at least one of said plurality of sensors monitors a different location in the site than the remaining sensors.

5. The computer-readable medium of claim 1, wherein said analyzing comprises at least one of: determining if a first view target from a first sensor at a first time represents the same physical object as a second view target from said first sensor at a second time; or determining if said first view target from said first sensor at said first time represents the same physical object as a third view target from a second sensor at said first time.

6. The computer-readable medium of claim 1, wherein updating said existing map target comprises updating a map location, a velocity, a classification type, and a stability status of said map target.

7. The computer-readable medium of claim 1, wherein said synchronizing said surveillance data to a single time source comprises: comparing a time stamp applied to said surveillance data from one sensor to said single time source; discarding said surveillance data from said one sensor when said time stamp and said single time source are different by more than a specified system-allowed latency; and ordering chronologically said surveillance data that is not discarded.

8. A computer system comprising the computer-readable medium of claim 1.

9. A computer-readable medium containing software that, when read by a computer, causes the computer to perform a method for wide-area site-based surveillance, the software comprising: a data receiver module, adapted to receive and synchronize surveillance data, including view targets, from a plurality of sensors at a site; and a data fusion engine, adapted to receive said synchronized data, wherein said data fusion engine comprises: a site model manager, comprising: a map-based calibrator, adapted to calibrate a sensor view to said site map and store said calibration in a map-view mapping; a view-based calibrator, adapted to calibrate a view to an expected average human size and store said calibration in said human size map; and a camera network model manager, adapted to create and store said sensor network model, wherein said site model manager is adapted to maintain a site model, and wherein said site model comprises a site map, a human size map, and a sensor network model; a target fusion engine, adapted to analyze said synchronized data using said site model to determine if said view targets represent a same physical object in the site, and create a map target corresponding to a physical object in the site, wherein said map target comprises at least one view target; and an event detect and response engine, adapted to detect an event of interest based on a behavior of said map target, wherein said map-based calibrator includes a static camera calibrator, a pan-tilt-zoom (PTZ) camera calibrator, and an omni-camera calibrator, and wherein said PTZ camera calibrator is adapted to: (a) estimate a homography using a set of control points from said site map; (b) estimate an effective field of view for each sensor from said homography; (c) estimate initial PTZ camera parameters, including at least one of camera map location, camera height, pan, tilt, roll, zoom, or relative focal length compared to image size; (d) refine said camera parameters such that said camera parameters are consistent with said homography; (e) produce a new set of control points; and (f) repeat steps (a) through (e) until an acceptable error based on said control points is achieved.

10. The computer-readable medium of claim 9, wherein said static camera calibrator is adapted to: calibrate a ground plane in a video frame to a ground on said site map, using at least one control point; map a view for each of said plurality of sensors to said site map using a homography estimation; and estimate an effective field of view for each of said plurality of sensors using said human size map.

11. The computer-readable medium of claim 9, wherein at least one of said plurality of sensors monitors a different location at the site than the remaining sensors.

12. The computer-readable medium of claim 9, wherein said site map comprises one of an aerial photograph, a computer graphical drawing, a blueprint, a photograph, or a video frame.

13. The computer-readable medium of claim 12, wherein said site map comprises a plurality of control points.

14. The computer-readable medium of claim 9, wherein said sensor network model comprises a set of entry/exit points for each sensor field of view, and a set of possible paths between said entry/exit points.

15. The computer-readable medium of claim 9, wherein said event detect and response engine is adapted to zoom a first pan-tilt-zoom (PTZ) camera in to a view target and to follow said view target until said view target leaves a field of view for said first PTZ camera.

16. The computer-readable medium of claim 15, wherein said event detect and response engine is further adapted to direct a second PTZ camera to follow said view target when said view target leaves said field of view for said first PTZ camera and enters a field of view for said second PTZ camera.

17. The computer-readable medium of claim 9, wherein said data fusion engine is adapted to receive a user-defined global event of interest, wherein said user-defined global event of interest is based on said site map.

18. A first fusion sensor comprising the computer-readable medium of claim 9, said first fusion sensor producing surveillance data.

19. A second fusion sensor adapted to receive said surveillance data from said first fusion sensor of claim 18.

20. The second fusion sensor of claim 19, further adapted to receive and synchronize surveillance data, including view targets, from another plurality of sensors.

21. A computer-readable medium containing software that, when read by a computer, causes the computer to perform a method for wide-area site-based surveillance, the software comprising: a data receiver module, adapted to receive and synchronize surveillance data, including view targets, from a plurality of sensors at a site; and a data fusion engine, adapted to receive said synchronized data, wherein said data fusion engine comprises: a site model manager, adapted to maintain a site model, wherein said site model comprises a site map, a human size map, and a sensor network model; a target fusion engine, adapted to analyze said synchronized data using said site model to determine if said view targets represent a same physical object in the site, and create a map target corresponding to a physical object in the site, wherein said map target comprises at least one view target; and an event detect and response engine, adapted to detect an event of interest based on a behavior of said map target, wherein said human size map comprises a data structure based on a frame size that provides, at each image position in said frame, an expected average human image height and image area, and wherein when a camera-to-map calibration is not available, a view-based calibrator is adapted to construct said data structure by: detecting and tracking a potential human object in a view over a time period; when said potential human object satisfies a human head model and a human shape model for a specified duration, updating a human size statistic data structure with a size of said potential human object, wherein each section of said human size statistic data structure corresponds to a section of said view, and represents the average size of a human detected in said section of said view; and for a section in said human size statistic data structure with insufficient data, interpolating values from surrounding sections to determine an average for said section in said table with insufficient data.
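Claim 21 ends by filling sections of the human size statistic data structure that have insufficient data, interpolating values from the surrounding sections. A minimal sketch of that fill step, assuming the data structure is a 2-D grid of per-section average human sizes in which empty sections are marked `None`; the grid layout, the `None` marker, and the use of 8-connected neighbors are assumptions, since the claim does not specify which surrounding sections contribute:

```python
def fill_human_size_map(grid):
    """Return a copy of `grid` in which each None cell (a section with
    insufficient data) is replaced by the mean of its valid 8-connected
    neighbors; cells with no valid neighbor remain None."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] is not None:
                continue  # section already has an observed average size
            neighbors = [grid[nr][nc]
                         for nr in range(max(0, r - 1), min(rows, r + 2))
                         for nc in range(max(0, c - 1), min(cols, c + 2))
                         if (nr, nc) != (r, c) and grid[nr][nc] is not None]
            if neighbors:
                out[r][c] = sum(neighbors) / len(neighbors)
    return out
```

For example, on `[[120.0, None], [100.0, 110.0]]` (section values in pixels of image height), the empty section is filled with the mean of its three observed neighbors, 110.0.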
Patents cited by this patent (4)
Soumitra Sengupta ; Damian Lyons ; Thomas Murphy ; Daniel Reese, Automated camera handoff system for figure tracking in a multiple camera system.
Greiffenhagen,Michael; Ramesh,Visvanathan; Comaniciu,Dorin, Statistical modeling and performance characterization of a real-time dual camera surveillance system.
Elangovan, Vidya; Milnes, Kenneth A.; Heidmann, Timothy P., Detecting an object in an image using camera registration data indexed to location or camera sensors.
Tojo, Hiroshi, Image pickup apparatus and method for detecting an entrance or exit event of an object in a frame image and medium storing a program causing a computer to function as the apparatus.
Tojo, Hiroshi, Information processing apparatus with display control unit configured to display on a display apparatus a frame image, and corresponding information processing method, and medium.
Au, KwongWing; Curtner, Keith L.; Bedros, Saad J., Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing.
Kniffen, Stacy K.; Gibbs, Daniel P.; Bailey, Weldon T.; Hillebrand, Mark J.; Erbert, Stephen R., Moving object detection, tracking, and displaying systems.
An, Myung-seok, Surveillance camera system for controlling cameras using position and orientation of the cameras and position information of a detected object.
Gibbs, Daniel P.; Kniffen, Stacy K.; Bailey, Weldon T.; Dean, Jordan S.; Becker, Michael F., System for extending a field-of-view of an image acquisition device.