A video tracking system includes a user interface configured to facilitate tracking of a target between video cameras. The user interface includes user controls configured to assist a user in selecting video cameras as the target moves between fields of view of the video cameras. These user controls are automatically associated with specific cameras based on a camera topology relative to a point of view of the camera whose video data is currently being viewed. The video tracking system further includes systems and methods of synchronizing video data generated using the video cameras and of automatically generating a stitched video sequence based on the user selection of video cameras. The target may be tracked in real-time or in previously recorded video, and may be tracked forward or backward in time.
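The camera-topology idea described above can be pictured as a mapping from each camera to its neighbors, keyed by a direction relative to that camera's own viewpoint, so that user controls can be placed around the current view and switching cameras becomes a lookup. The following is a minimal illustrative sketch, not the patent's implementation; the camera names, directions, and function names are all hypothetical.

```python
# Hypothetical camera topology: for each camera, the neighboring camera a
# target would appear on when leaving the field of view in a given
# direction, expressed relative to that camera's viewpoint.
CAMERA_TOPOLOGY = {
    "lobby": {"left": "hallway", "right": "entrance"},
    "hallway": {"right": "lobby", "up": "stairs"},
    "entrance": {"left": "lobby"},
}

def controls_for(current_camera):
    """Return (direction, camera) pairs to render as user controls
    positioned around the presentation of the current camera's video."""
    return sorted(CAMERA_TOPOLOGY.get(current_camera, {}).items())

def select_camera(current_camera, direction):
    """Switch the displayed video source when the user activates the
    control for a direction; stay put if no camera is associated."""
    return CAMERA_TOPOLOGY.get(current_camera, {}).get(direction, current_camera)
```

Because the directions are stored relative to each camera's viewpoint, the controls are re-associated with different cameras each time the viewed source changes, which is the behavior the claims describe for the UI generation and video data selection logic.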
Representative claims
1. A system comprising:
a first data storage configured to store a plurality of video data received from a plurality of video sources, respectively;
a second data storage configured to store data representative of first and second camera topologies between the plurality of video sources, the first and second camera topologies being relative to viewpoints of the corresponding video sources;
wherein the first camera topology includes a first association between a field of view of a first member of the plurality of video sources and a second member of the plurality of video sources, and a second association between the field of view of the first member of the plurality of video sources and a third member of the plurality of video sources;
wherein the second camera topology includes an association between the field of view of the second member of the plurality of video sources and the field of view of the third member of the plurality of video sources;
memory storing computing instructions; and
one or more processors configured to execute the computing instructions to provide:
user interface (UI) generation logic configured to generate a user interface for display to a user, the user interface including a region to display the plurality of video data and a plurality of user inputs configured for the user to navigate between the plurality of video data based on the first and second camera topologies, the UI generation logic configured to present first and second user controls at positions, based on the first and second camera topologies, around a presentation of the video data generated using a first member of the plurality of video sources, wherein the first and second user controls are presented at positions around the presentation of the video data that are relative to the viewpoint of the first member; and
video data selection (VDS) logic configured to select which member of the plurality of video data to display in the user interface responsive to the plurality of user inputs.

2. The system of claim 1, wherein the first data storage is further configured to store one or more indexes to the plurality of video data.

3. The system of claim 1, wherein the first camera topology includes at least four view regions.

4. The system of claim 1, wherein the first camera topology includes an association between a field of view of the first member of the plurality of video sources and more than one other members of the plurality of video sources.

5. The system of claim 4, wherein the more than one other members of the plurality of video sources are ranked in association with the view region.

6. The system of claim 5, wherein the more than one other members of the plurality of video sources are ranked based on a historical tracking pattern.

7. The system of claim 1, wherein the camera topology is determined in real time using positioning system data.

8. The system of claim 1, wherein at least the first member of the plurality of video sources includes a video source whose position is remotely controllable by a user.

9. The system of claim 1, wherein the plurality of user inputs are associated with members of the plurality of video sources based on one of the one or more camera topologies and which member of the plurality of video data is selected to be displayed in the user interface.

10. The system of claim 1, wherein the plurality of user inputs are associated with members of the plurality of video sources based on one of the one or more camera topologies.

11. The system of claim 1, wherein the UI generation logic is configured to display the plurality of video data in an accelerated mode.

12. The system of claim 1, wherein the UI generation logic is configured for tracking a target from recorded video data to real-time video data.

13. The system of claim 1, wherein the UI generation logic is configured for tracking a target back in time.

14. The system of claim 1, wherein the UI generation logic is configured for tracking a target from real-time video data to recorded video data.

15. The system of claim 1, wherein the VDS logic is configured to temporally synchronize a first member of the plurality of video data and a second member of the plurality of video data.

16. The system of claim 1, further comprising VDS data storage configured to store an estimated travel time between members of the plurality of video sources, wherein the VDS logic is configured to use a time offset based on the estimated travel time.

17. The system of claim 1, further including stitching logic configured to automatically stitch parts of the plurality of video data into a video sequence based on navigation between the plurality of video data as performed by the user using the plurality of user inputs.

18. The system of claim 1, further including stitching logic configured to stitch parts of the plurality of video data into a video sequence and to generate thumbnails configured for navigating the video sequence.

19. The system of claim 1, further including stitching logic configured to stitch parts of the plurality of video data into a video sequence, and to associate user notes with the video sequence.

20. The system of claim 1, further including video input interface logic configured to convert the plurality of video data from a plurality of different formats to a common format.

21. The system of claim 1, further including report generating logic configured to automatically include a video sequence in a report, the video sequence including the plurality of video data.

22. The system of claim 1, wherein the UI generation logic and the VDS logic each comprise hardware, or computing instructions stored on a computer readable medium.

23. The system of claim 1, wherein the UI generation logic is configured to present at least first and second user controls in positions around a primary window displaying the video data, where the positions of the user controls are relative to the viewpoint of a first video source.

24. The system of claim 1, wherein the VDS logic is configured to receive a user selection of a second user control and reassociate the first and second user controls with different video sources based on a viewpoint of a video source associated with the second user control.

25. The system of claim 1, further comprising report generator logic configured to generate a report that includes video sequences and thumbnails from the plurality of video sources.

26. The system of claim 1, further comprising report generator logic configured to generate a report based on a template, the report including a thumbnail from a camera and permitting a user to add notes to the report.

27. The system of claim 1, wherein at least one of the video sources represents a portable camera, the VDS logic configured to determine a location of the portable camera.

28. The system of claim 1, further comprising report generator logic configured to generate a report that includes a listing of areas through which a target travels based on which members of the video sources are selected to track the target.

29. The system of claim 1, wherein the UI generation logic is configured to track a target using stairs, the first and second user controls being associated with members of the plurality of video sources that track the target moving upstairs and downstairs, respectively.

30. The system of claim 1, wherein the first and second user controls represent physical buttons on the user interface.

31. The system of claim 1, wherein, when a cursor is moved over one of the user controls, the UI generation logic is configured to display video data generated using an associated member of the plurality of video sources.

32. The system of claim 1, wherein the UI generation logic is configured to move between the video data presented on the user interface from different members of the plurality of video sources in a time synchronized manner such that, when a user selects to switch from viewing pre-recorded video data recorded at a prior point in time to a new selected member, video data recorded by the new selected member is presented corresponding to the prior point in time.

33. The system of claim 1, further comprising a global positioning system (GPS) input configured to receive at least one of location and orientation information from a video source, the VDS logic utilizing the at least one of location and orientation information to determine the one or more camera topologies.

34. The system of claim 33, wherein the orientation information is utilized to determine a direction in which an associated video source is pointed.

35. The system of claim 1, wherein the UI generation logic is configured to associate a group of video sources with a second user control, and when the user selects the second user control, the UI generation logic is configured to present a list of the group of video sources from which to choose.

36. The system of claim 1, wherein the UI generation logic is configured to associate a group of video sources with a second user control, and when the user selects the second user control, the UI generation logic is configured to present a group of thumbnails showing video data generated by corresponding members from the group of video sources.

37. The system of claim 1, wherein the UI generation logic is configured to present a thumbnail that includes a still image and is associated with at least one time point and a member of the plurality of video sources.

38. The system of claim 37, wherein the thumbnail represents a sequence thumbnail that represents an index to at least one of the plurality of video data.

39. The system of claim 37, wherein the VDS logic is configured to select the member of the plurality of video data to display based on the user input clicking on the thumbnail.

40. The system of claim 37, wherein the UI generation logic is configured to present a preview thumbnail when a cursor is hovered over a portion of the user interface, the preview window displaying video data associated with a member of the plurality of video sources.

41. The system of claim 37, wherein the UI generation logic is configured to present the thumbnail when the cursor is hovered over the first user control and the video data is associated with the first member of the plurality of video sources.
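Claims 15, 16, and 32 together describe time-synchronized switching between cameras, optionally offset by an estimated travel time between the cameras' coverage areas. A minimal sketch of that idea follows; it is not the patent's method, and the travel-time table, timestamps, and function name are illustrative assumptions.

```python
# Hypothetical estimated travel times (seconds) between camera pairs,
# as stored by the VDS data storage of claim 16.
TRAVEL_TIME = {("lobby", "hallway"): 4.0, ("hallway", "lobby"): 4.0}

def switch_playback_time(current_time, from_cam, to_cam, forward=True):
    """When the user switches cameras while tracking a target, return the
    playback position for the new camera: the current position offset by
    the estimated travel time, forward in time when tracking forward and
    backward when tracking back. Unknown pairs get no offset, i.e. plain
    time-synchronized switching as in claim 32."""
    offset = TRAVEL_TIME.get((from_cam, to_cam), 0.0)
    return current_time + offset if forward else current_time - offset
```

For example, a target leaving the lobby camera at t=100 s would be expected on the hallway camera near t=104 s, so the new video is cued there rather than at the raw switch time.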