[US Patent]
Method and system for non-causal zone search in video monitoring
IPC Classification
Country / Type
United States (US) Patent
Granted
International Patent Classification (IPC, 7th edition)
G06K-009/00
H04L-029/06
H04N-021/431
H04N-021/433
H04N-021/4335
H04W-012/02
H04W-012/06
G06T-007/20
H04N-007/18
G06F-003/0481
G06F-003/0484
G11B-027/00
G11B-027/028
H04L-029/08
G06F-003/0482
G11B-027/031
G11B-027/30
G11B-027/34
G06K-009/32
H04N-005/14
H04N-021/239
G11B-027/10
H04N-005/93
H04N-009/87
H04W-004/00
H04W-012/08
G08B-013/196
H04W-012/04
G06F-003/0485
G06F-003/0488
H04N-021/2187
H04N-021/2743
H04N-021/462
H04N-021/422
Application Number
US-0510029
(2014-10-08)
Registration Number
US-9779307
(2017-10-03)
Inventors / Address
Laska, Jason N.
Nelson, Gregory R.
Duffy, Greg
Mitsuji, Hiro
Hill, Cameron
Davidsson, Martin
Montalbo, Michael D.
Wan, Tung Yuen
Applicant / Address
GOOGLE INC.
Agent / Address
Morgan, Lewis & Bockius LLP
Citation Information
Times Cited: 0
Cited Patents: 74
Abstract
A computing system processes a video recording to identify a plurality of motion events, each corresponding to a respective video segment along a timeline of the video recording. The computing system identifies at least one object in motion within a scene depicted in the video recording and stores a respective event mask for each event. The computing system receives a definition of a zone of interest within the scene. In response to receiving the definition, the computing system determines, for each motion event, whether the respective event mask of the motion event overlaps with the zone of interest by at least a predetermined overlap factor, and identifies one or more events of interest from the plurality of motion events, wherein the respective event mask of each identified event of interest is determined to overlap with the zone of interest by at least the predetermined overlap factor.
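The retrospective ("non-causal") zone search summarized in the abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: it assumes event masks and the zone of interest are represented as boolean pixel grids, and it defines the overlap factor as the fraction of an event's motion pixels that fall inside the zone, a choice the record leaves open.

```python
import numpy as np

def overlap_factor(event_mask: np.ndarray, zone_mask: np.ndarray) -> float:
    # Fraction of the event's motion pixels that fall inside the zone.
    motion = event_mask.astype(bool)
    if not motion.any():
        return 0.0
    inside = np.logical_and(motion, zone_mask.astype(bool)).sum()
    return float(inside) / float(motion.sum())

def events_of_interest(event_masks, zone_mask, min_overlap=0.5):
    # Non-causal search: the zone is defined after recording, so the
    # stored per-event masks are re-tested against it rather than
    # re-processing the video itself.
    return [i for i, mask in enumerate(event_masks)
            if overlap_factor(mask, zone_mask) >= min_overlap]
```

Because only the compact stored masks are consulted, a newly drawn zone can be evaluated against the entire recorded timeline without touching the video frames again.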
Representative Claims
1. A method of facilitating review of a video recording by performing a retrospective event search, comprising: processing the video recording to identify a plurality of motion events, each motion event corresponding to a respective video segment along a timeline of the video recording, and identifying at least one object in motion within a scene depicted in the video recording; obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording, the respective event mask including an aggregate of motion pixels from multiple frames of the motion event, wherein the motion pixels are associated with the at least one object in motion in the multiple frames of the motion event; after processing the video recording, receiving a definition of a zone of interest within the scene depicted in the video recording; and in response to receiving the definition of the zone of interest: determining, for each of the plurality of motion events, whether the respective event mask of the motion event overlaps with the zone of interest by at least a predetermined overlap factor; and identifying one or more events of interest from the plurality of motion events, wherein the respective event mask of each of the identified events of interest is determined to overlap with the zone of interest by at least the predetermined overlap factor.

2. The method of claim 1, wherein obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording comprises: generating the respective event mask for each of the plurality of motion events, wherein the generating includes: creating a respective binary motion pixel map for each frame of the respective video segment associated with the motion event; and combining the respective binary motion pixel maps of all frames of the respective video segment to generate the respective event mask for the motion event.

3. The method of claim 1, further comprising: receiving a first selection input from the user to select the zone of interest as a first event filter; and visually labeling the one or more identified events of interest with a respective indicator associated with the zone of interest in an event review interface.

4. The method of claim 1, wherein the definition of the zone of interest includes a plurality of vertices specified in the scene of the video recording.

5. The method of claim 1, further comprising: after receiving the definition of the zone of interest: processing a live video stream depicting the scene of the video recording to detect a start of a live motion event; generating a live event mask based on respective motion pixels associated with a respective object in motion identified in the live motion event; determining, in real-time, whether the live event mask overlaps with the zone of interest by at least the predetermined overlap factor; and in accordance with a determination that the live event mask overlaps with the zone of interest by at least the predetermined overlap factor, generating a real-time event alert for the zone of interest.

6. The method of claim 5, further comprising: visually labeling the live motion event with an indicator associated with the zone of interest in an event review interface.

7. A computing system for facilitating review of a video recording by performing a retrospective event search, comprising: one or more processors; and memory having instructions stored thereon, the instructions, when executed by the one or more processors, cause the processors to perform operations comprising: processing the video recording to identify a plurality of motion events, each motion event corresponding to a respective video segment along a timeline of the video recording, and identifying at least one object in motion within a scene depicted in the video recording; obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording, the respective event mask including an aggregate of motion pixels from multiple frames of the motion event, wherein the motion pixels are associated with the at least one object in motion in the multiple frames of the motion event; after processing the video recording, receiving a definition of a zone of interest within the scene depicted in the video recording; and in response to receiving the definition of the zone of interest: determining, for each of the plurality of motion events, whether the respective event mask of the motion event overlaps with the zone of interest by at least a predetermined overlap factor; and identifying one or more events of interest from the plurality of motion events, wherein the respective event mask of each of the identified events of interest is determined to overlap with the zone of interest by at least the predetermined overlap factor.

8. The computing system of claim 7, wherein obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording comprise: generating the respective event mask for each of the plurality of motion events, wherein the generating includes: creating a respective binary motion pixel map for each frame of the respective video segment associated with the motion event; and combining the respective binary motion pixel maps of all frames of the respective video segment to generate the respective event mask for the motion event.

9. The computing system of claim 7, wherein the operations further comprise: receiving a first selection input from the user to select the zone of interest as a first event filter; and visually labeling the one or more identified events of interest with a respective indicator associated with the zone of interest in an event review interface.

10. The computing system of claim 7, wherein the definition of the zone of interest includes a plurality of vertices specified in the scene of the video recording.

11. The computing system of claim 7, wherein the operations further comprise: after receiving the definition of the zone of interest: processing a live video stream depicting the scene of the video recording to detect a start of a live motion event; generating a live event mask based on respective motion pixels associated with a respective object in motion identified in the live motion event; determining, in real-time, whether the live event mask overlaps with the zone of interest by at least the predetermined overlap factor; and in accordance with a determination that the live event mask overlaps with the zone of interest by at least the predetermined overlap factor, generating a real-time event alert for the zone of interest.

12. The computing system of claim 11, wherein the operations further comprise: visually labeling the live motion event with an indicator associated with the zone of interest in an event review interface.

13. A non-transitory computer-readable medium for facilitating review of a video recording by performing a retrospective event search, the non-transitory computer-readable medium having instructions stored thereon, the instructions, when executed by one or more processors, cause the processors to perform operations comprising: processing the video recording to identify a plurality of motion events, each motion event corresponding to a respective video segment along a timeline of the video recording, and identifying at least one object in motion within a scene depicted in the video recording; obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording, the respective event mask including an aggregate of motion pixels from multiple frames of the motion event, wherein the motion pixels are associated with the at least one object in motion in the multiple frames of the motion event; after processing the video recording, receiving a definition of a zone of interest within the scene depicted in the video recording; and in response to receiving the definition of the zone of interest: determining, for each of the plurality of motion events, whether the respective event mask of the motion event overlaps with the zone of interest by at least a predetermined overlap factor; and identifying one or more events of interest from the plurality of motion events, wherein the respective event mask of each of the identified events of interest is determined to overlap with the zone of interest by at least the predetermined overlap factor.

14. The computer-readable medium of claim 13, wherein obtaining and storing a respective event mask for each of the plurality of motion events identified in the video recording comprise: generating the respective event mask for each of the plurality of motion events, wherein the generating includes: creating a respective binary motion pixel map for each frame of the respective video segment associated with the motion event; and combining the respective binary motion pixel maps of all frames of the respective video segment to generate the respective event mask for the motion event.

15. The computer-readable medium of claim 13, wherein the operations further comprise: receiving a first selection input from the user to select the zone of interest as a first event filter; and visually labeling the one or more identified events of interest with a respective indicator associated with the zone of interest in an event review interface.

16. The computer-readable medium of claim 13, wherein the definition of the zone of interest includes a plurality of vertices specified in the scene of the video recording.

17. The computer-readable medium of claim 13, wherein the operations further comprise: after receiving the definition of the zone of interest: processing a live video stream depicting the scene of the video recording to detect a start of a live motion event; generating a live event mask based on respective motion pixels associated with a respective object in motion identified in the live motion event; determining, in real-time, whether the live event mask overlaps with the zone of interest by at least the predetermined overlap factor; and in accordance with a determination that the live event mask overlaps with the zone of interest by at least the predetermined overlap factor, generating a real-time event alert for the zone of interest.
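Claim 2's mask-generation step (a binary motion pixel map per frame, combined across all frames of the segment into one event mask) can be sketched with simple frame differencing. The differencing threshold and the use of a logical OR to combine the per-frame maps are illustrative assumptions; the claims do not fix a particular motion-detection method or combination rule.

```python
import numpy as np

def binary_motion_map(frame: np.ndarray, prev_frame: np.ndarray,
                      threshold: int = 25) -> np.ndarray:
    # A pixel counts as a "motion pixel" if its intensity changed by more
    # than `threshold` between consecutive (assumed grayscale) frames.
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold

def event_mask(frames, threshold: int = 25) -> np.ndarray:
    # Aggregate the per-frame binary motion pixel maps of the whole video
    # segment into a single event mask (here via logical OR), so the mask
    # records everywhere the object moved during the event.
    mask = np.zeros(frames[0].shape, dtype=bool)
    for prev, cur in zip(frames, frames[1:]):
        mask |= binary_motion_map(cur, prev, threshold)
    return mask
```

Storing this aggregate per event is what makes the retrospective search cheap: a zone drawn later is compared against these small masks instead of the raw footage.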
Saft, Keith D.; Bernoulli, Carlo P.; Parry, Lee J.; Shephard, Megan D.; Poelker, Cole J., Graphic user interface for displaying content selections on a display panel.
Saft, Keith D.; Bernoulli, Carlo P.; Parry, Lee J.; Shephard, Megan D.; Poelker, Cole J., Graphical user interface for displaying content selections on a display panel.
Desimone, Michael J.; Hampapur, Arun; Lu, Zuoxuan; Mercier, Carl P.; Milite, Christopher S.; Russo, Stephen R.; Shu, Chiao-Fe; Tan, Chek K., Identifying spatial locations of events within video image data.
Laska, Jason N.; Nelson, Gregory R.; Duffy, Greg; Mitsuji, Hiro; Hill, Cameron; Davidsson, Martin; Montalbo, Michael D.; Wan, Tung Yuen, Method and system for cluster-based video monitoring and event categorization.
Laska, Jason N.; Nelson, Greg R.; Duffy, Greg; Hill, Cameron; Davidsson, Martin, Method and system for retroactively changing a display characteristic of event indicators on an event timeline.
Sharma, Rajeev; Mummareddy, Satish; Hershey, Jeff; Jung, Namsoon, Method and system for segmenting people in a physical space based on automatic behavior analysis.
Schonfeld,Dan; Hariharakrishnan,Karthik; Raffy,Philippe; Yassa,Fathy, Occlusion/disocclusion detection using K-means clustering near object boundary with comparison of average motion of clusters to object and background motions.
Borzycki, Andrew; Deva, Mallikharjuna Reddy; Gajendar, Uday Nandigam; Roychoudhry, Anil, Single sign-on access in an orchestration framework for connected devices.
Lane, Corey A.; Buck, Heidi L.; Li, Joshua S.; Bagnall, Bryan D.; Stastny, John C.; Hallenborg, Eric C., System for tracking maritime domain targets from full motion video.
Laska, Jason N.; Hua, Wei; Chaudhry, Rizwan Ahmed; Varadharajan, Srivatsan; Heitz, III, George Alban, Systems and methods for categorizing motion event candidates.
Watts, Tim J.; Offerdahl, Alex; Atwood, Joe; Driscoll, Jim; Loar, Steve; Stabnow, Jeff, Systems, computer-implemented methods, and computer medium to determine premiums and indemnities for supplemental crop insurance.
Wilson Charles Park ; Pedersen ; Jr. Chris Harvey ; Auyeung Alex Kamlun ; MacCormack David Ross, Video data capture and formatting in intelligent video information management system.