System for surveillance by integrating radar with a panoramic staring sensor
IPC classification
Country / Type: United States (US) patent, granted
International Patent Classification (IPC, 7th ed.): G01S-013/86; H04N-007/18; H04N-005/232; G06T-007/00
Application number: US-0502944 (filed 2014-09-30)
Registration number: US-9778351 (granted 2017-10-03)
Inventors / Address:
Khosla, Deepak
Huber, David J.
Chen, Yang
VanBuer, Darrel J.
Martin, Kevin R.
Applicant / Address:
HRL Laboratories, LLC
Agent / Address:
Tope-McKay & Associates
Citation information
Cited by: 1
Patents cited: 12
Abstract
Described is a system for surveillance that integrates radar with a panoramic staring sensor. The system captures image frames of a field-of-view of a scene using a multi-camera panoramic staring sensor. The field-of-view is scanned with a radar sensor to detect an object of interest. A radar detection is received when the radar sensor detects the object of interest. A radar message indicating the presence of the object of interest is generated. Each image frame is marked with a timestamp. The image frames are stored in a frame storage database. The set of radar-based coordinates from the radar message is converted into a set of multi-camera panoramic sensor coordinates. A video clip comprising a sequence of image frames corresponding in time to the radar message is created. Finally, the video clip is displayed, showing the object of interest.
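The pipeline the abstract describes (timestamped frames, radar detection, radar-to-panorama coordinate conversion, time-matched clip extraction) can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names, the assumption of a full 360-degree panorama indexed by azimuth, and the fixed pre/post clip window are all hypothetical.

```python
import bisect

def radar_to_panorama(azimuth_deg, pano_width_px, pano_fov_deg=360.0, offset_deg=0.0):
    """Map a radar azimuth to a horizontal pixel column in the panorama.

    Assumes the panorama spans pano_fov_deg horizontally and that
    offset_deg aligns the radar's zero azimuth with the panorama's
    left edge (both assumptions, not from the patent).
    """
    rel = (azimuth_deg - offset_deg) % 360.0
    return int(rel / pano_fov_deg * pano_width_px) % pano_width_px

def frames_for_detection(timestamps, t_detect, pre=2.0, post=2.0):
    """Return indices of frames captured within [t_detect - pre, t_detect + post].

    timestamps must be sorted, which holds if frames are stored in
    capture order; the resulting index range is the "video clip".
    """
    lo = bisect.bisect_left(timestamps, t_detect - pre)
    hi = bisect.bisect_right(timestamps, t_detect + post)
    return list(range(lo, hi))
```

For example, a detection at azimuth 90 degrees in a 3600-pixel panorama lands at column 900, and a detection timestamp selects the stored frames bracketing it in time.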
Representative claims
1. A system for surveillance, the system comprising: one or more processors and a non-transitory computer-readable medium having executable instructions encoded thereon such that when executed, the one or more processors perform operations of: storing a set of image frames of a field-of-view of a scene captured using a multi-camera panoramic staring sensor in a frame storage database, wherein each image frame is marked with a time of image frame capture; generating a radar detection when a radar sensor detects an object of interest in the field-of-view of the scene; based on the radar detection, generating a radar message, marked with a time of radar detection, indicating the presence of the object of interest; for each radar detection, converting a set of radar coordinates corresponding to the radar detection into a set of multi-camera panoramic sensor coordinates; creating a video clip comprising a sequence of image frames in the set of image frames, wherein the times of image frame capture for the sequence of image frames correspond to the times of radar detections; and displaying the video clip, wherein the video clip displays the object of interest.

2.
The system as set forth in claim 1, wherein the one or more processors further perform operations of: comparing, with an active cognitive processor module, each image frame in the set of image frames to a background model; detecting, with the active cognitive processor module, at least one cognitive detection in an image frame, wherein the at least one cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; assigning, with the active cognitive processor module, a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest; and storing the cognitive detections having the highest cognitive scores in the frame storage database.

3. The system as set forth in claim 2, wherein the one or more processors further perform operations of managing cognitive detections according to the following: requesting a list of cognitive detections having the highest cognitive scores from the active cognitive processor module; for each cognitive detection in the list, requesting a sequence of image frames comprising the image frame corresponding to the cognitive detection and a plurality of image frames before and after the image frame corresponding to the cognitive detection from the capture and recording module; for each cognitive detection in the list, constructing a video sequence corresponding to the time of the cognitive detection from the sequence of image frames; and for each cognitive detection in the list, sending the video sequence to the user interface for user analysis.

4. The system as set forth in claim 3, wherein the one or more processors further perform an operation of retrieving from the active cognitive processor module a cognitive score for a region of a field-of-view of a scene in which a radar detection originated.

5.
The system as set forth in claim 4, wherein the one or more processors further perform an operation of detecting, in parallel, objects of interest with both the active cognitive processor module and the radar sensor independently.

6. The system as set forth in claim 2, wherein the one or more processors further perform operations of: using the bounding box to perform a classification of the cognitive detection in a classification module using object recognition; applying a tracker to the bounding box, and tracking the bounding box across image frames using a tracking module, wherein a user can utilize the tracker to switch from at least one image frame in the video clip corresponding to a radar detection to a current location of the object of interest.

7. The system as set forth in claim 1, wherein the one or more processors further perform an operation of forwarding the video clip to a reactive cognitive processor module, wherein the reactive cognitive processor module performs operations of: comparing the image frames in the video clip to a background model; detecting at least one cognitive detection in at least one image frame in the video clip, wherein the cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; and assigning a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest.

8. The system as set forth in claim 1, wherein the one or more processors further perform operations of: using a plurality of multi-camera panoramic staring sensors to continuously capture the set of image frames of the field-of-view of the scene; and using a plurality of radar sensors to detect the object of interest to enable the system to scale up the field-of-view to any predetermined value up to a 360-degree field-of-view.
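The tracker of claim 6, which follows a bounding box across image frames, could be approximated with simple overlap-based association. This is a hedged sketch: the intersection-over-union metric, the 0.3 threshold, and greedy matching are assumptions, not the claimed tracking module.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def track(prev_box, candidates, min_iou=0.3):
    """Pick the candidate box in the next frame that best overlaps prev_box.

    Returns None when no candidate overlaps enough (track lost).
    """
    best = max(candidates, key=lambda c: iou(prev_box, c), default=None)
    if best is not None and iou(prev_box, best) >= min_iou:
        return best
    return None
```

Chaining track() frame-to-frame from the clip that triggered the radar detection gives the "current location" jump described in the claim.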
9. A computer-implemented method for surveillance, comprising an act of: causing one or more processors to execute instructions stored on a non-transitory memory such that upon execution, the one or more processors perform operations of: storing a set of image frames of a field-of-view of a scene captured using a multi-camera panoramic staring sensor in a frame storage database, wherein each image frame is marked with a time of image frame capture; generating a radar detection when a radar sensor detects an object of interest in the field-of-view of the scene; based on the radar detection, generating a radar message, marked with a time of radar detection, indicating the presence of the object of interest; for each radar detection, converting a set of radar coordinates corresponding to the radar detection into a set of multi-camera panoramic sensor coordinates; creating a video clip comprising a sequence of image frames in the set of image frames, wherein the times of image frame capture for the sequence of image frames correspond to the times of radar detections; and displaying the video clip, wherein the video clip displays the object of interest.

10.
The method as set forth in claim 9, wherein the one or more processors further perform operations of: comparing, with an active cognitive processor module, each image frame in the set of image frames to a background model; detecting, with the active cognitive processor module, at least one cognitive detection in an image frame, wherein the at least one cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; assigning, with the active cognitive processor module, a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest; and storing the cognitive detections having the highest cognitive scores in the frame storage database.

11. The method as set forth in claim 10, wherein the one or more processors further perform an operation of managing cognitive detections according to the following: requesting a list of cognitive detections having the highest cognitive scores from the active cognitive processor module; for each cognitive detection in the list, requesting a sequence of image frames comprising the image frame corresponding to the cognitive detection and a plurality of image frames before and after the image frame corresponding to the cognitive detection from the capture and recording module; for each cognitive detection in the list, constructing a video sequence corresponding to the time of the cognitive detection from the sequence of image frames; and for each cognitive detection in the list, sending the video sequence to the user interface for user analysis.

12. The method as set forth in claim 11, wherein the data processor further performs an operation of retrieving from the active cognitive processor module a cognitive score for a region of a field-of-view of a scene in which a radar detection originated.

13.
The method as set forth in claim 12, wherein the data processor further performs an operation of detecting, in parallel, objects of interest with both the active cognitive processor module and the radar sensor independently.

14. The method as set forth in claim 10, wherein the data processor further performs operations of: using the bounding box to perform a classification of the cognitive detection in a classification module using object recognition; applying a tracker to the bounding box, and tracking the bounding box across image frames using a tracking module, wherein a user can utilize the tracker to switch from at least one image frame in the video clip corresponding to a radar detection to a current location of the object of interest.

15. The method as set forth in claim 9, wherein the one or more processors further perform an operation of forwarding the video clip to a reactive cognitive processor module, wherein the reactive cognitive processor module performs operations of: comparing the image frames in the video clip to a background model; detecting at least one cognitive detection in at least one image frame in the video clip, wherein the cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; and assigning a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest.

16. The method as set forth in claim 9, wherein the one or more processors further perform operations of: using a plurality of multi-camera panoramic staring sensors to continuously capture the set of image frames of the field-of-view of the scene; and using a plurality of radar sensors to detect the object of interest to enable the system to scale up the field-of-view to any predetermined value up to a 360-degree field-of-view.

17.
A computer program product for surveillance, the computer program product comprising computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having a processor for causing the processor to perform operations of: storing a set of image frames of a field-of-view of a scene captured using a multi-camera panoramic staring sensor in a frame storage database, wherein each image frame is marked with a time of image frame capture; generating a radar detection when a radar sensor detects an object of interest in the field-of-view of the scene; based on the radar detection, generating a radar message, marked with a time of radar detection, indicating the presence of the object of interest; for each radar detection, converting a set of radar coordinates corresponding to the radar detection into a set of multi-camera panoramic sensor coordinates; creating a video clip comprising a sequence of image frames in the set of image frames, wherein the times of image frame capture for the sequence of image frames correspond to the times of radar detections; and displaying the video clip, wherein the video clip displays the object of interest.

18.
The computer program product as set forth in claim 17, further comprising instructions for causing the processor to perform operations of: comparing, with an active cognitive processor module, each image frame in the set of image frames to a background model; detecting, with the active cognitive processor module, at least one cognitive detection in an image frame, wherein the at least one cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; assigning, with the active cognitive processor module, a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest; and storing the cognitive detections having the highest cognitive scores in the frame storage database.

19. The computer program product as set forth in claim 18, wherein the one or more processors further perform an operation of managing cognitive detections according to the following: requesting a list of cognitive detections having the highest cognitive scores from the active cognitive processor module; for each cognitive detection in the list, requesting a sequence of image frames comprising the image frame corresponding to the cognitive detection and a plurality of image frames before and after the image frame corresponding to the cognitive detection from the capture and recording module; for each cognitive detection in the list, constructing a video sequence corresponding to the time of the cognitive detection from the sequence of image frames; and for each cognitive detection in the list, sending the video sequence to the user interface for user analysis.

20.
The computer program product as set forth in claim 19, further comprising instructions for causing the processor to perform an operation of retrieving from the active cognitive processor module a cognitive score for a region of a field-of-view of a scene in which a radar detection originated.

21. The computer program product as set forth in claim 20, further comprising instructions for causing the processor to perform an operation of detecting, in parallel, objects of interest with both the active cognitive processor module and the radar sensor independently.

22. The computer program product as set forth in claim 18, further comprising instructions for causing the processor to perform operations of: using the bounding box to perform a classification of the cognitive detection in a classification module using object recognition; applying a tracker to the bounding box, and tracking the bounding box across image frames using a tracking module, wherein a user can utilize the tracker to switch from at least one image frame in the video clip corresponding to a radar detection to a current location of the object of interest.

23. The computer program product as set forth in claim 17, wherein the one or more processors further perform an operation of forwarding the video clip to a reactive cognitive processor module, wherein the reactive cognitive processor module performs operations of: comparing the image frames in the video clip to a background model; detecting at least one cognitive detection in at least one image frame in the video clip, wherein the cognitive detection corresponds to a region of the scene that deviates from the background model and represents the object of interest; and assigning a cognitive score and a bounding box to each cognitive detection to aid in user analysis, wherein a higher cognitive score corresponds to a greater deviation from the background model, and the bounding box surrounds the object of interest.

24.
The computer program product as set forth in claim 17, further comprising instructions for causing the processor to perform operations of: using a plurality of multi-camera panoramic staring sensors to continuously capture the set of image frames of the field-of-view of the scene; and using a plurality of radar sensors to detect the object of interest to enable the system to scale up the field-of-view to any predetermined value up to a 360-degree field-of-view.

25. A system for surveillance, the system comprising: a multi-camera panoramic staring sensor; a radar sensor; one or more processors and a non-transitory computer-readable medium having executable instructions encoded thereon such that when executed, the one or more processors perform operations of: storing a set of image frames of a field-of-view of a scene captured using the multi-camera panoramic staring sensor in a frame storage database, wherein each image frame is marked with a time of image frame capture; generating a radar detection when the radar sensor detects an object of interest in the field-of-view of the scene; based on the radar detection, generating a radar message, marked with a time of radar detection, indicating the presence of the object of interest; for each radar detection, converting a set of radar coordinates corresponding to the radar detection into a set of multi-camera panoramic sensor coordinates; creating a video clip comprising a sequence of image frames in the set of image frames, wherein the times of image frame capture for the sequence of image frames correspond to the times of radar detections; and displaying the video clip, wherein the video clip displays the object of interest.
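The "cognitive score" recurring in claims 2, 10, and 18, a measure of how strongly a region deviates from a background model, with a bounding box around the deviating region, can be illustrated with a background-subtraction sketch. All thresholds and the single-box simplification here are assumptions; the patent does not specify the active cognitive processor module at this level.

```python
import numpy as np

def cognitive_detections(frame, background, thresh=30, min_pixels=25):
    """Score how strongly a frame region deviates from a background model.

    Minimal sketch: per-pixel absolute difference, thresholded into a
    mask, then one bounding box around all deviating pixels and a score
    equal to the mean deviation inside the mask. A real system would
    segment connected components and score each region separately.
    """
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    mask = diff > thresh
    if mask.sum() < min_pixels:
        return None  # nothing deviates enough from the background
    ys, xs = np.nonzero(mask)
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    score = float(diff[mask].mean())  # higher score = greater deviation
    return score, box
```

Detections with the highest scores would then be retained in the frame storage database, matching the "storing the cognitive detections having the highest cognitive scores" step.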
Patents cited by this patent (12)
Sun, John (Thousand Oaks, CA); Sanders, Ross J. (Newbury Park, CA); Starace, Ralph C. (Camarillo, CA); Hackman, William K. (Thousand Oaks, CA), Air defense destruction missile weapon system.
Khosla, Deepak; Huber, David J.; Bhattacharyya, Rajan, System for optimal rapid serial visual presentation (RSVP) from user-specific neural brain signals.