Method apparatus system and computer program product for automated collection and correlation for tactical information
IPC Classification
Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): H04N-007/18; G06F-017/30
Application No.: US-0772541 (2010-05-03)
Registration No.: US-8896696 (2014-11-25)
Inventors: Ellsworth, Christopher C.; Baity, Sean; Johnson, Brandon; Geer, Andrew
Applicant: AAI Corporation
Agent: Venable LLP
Citation information: cited by 1 patent; cites 25 patents
Abstract
A method, and corresponding system, apparatus, and computer program product for automated collection and correlation for tactical information includes identifying an entity in imagery based on a field of view of the imagery using a processor, creating a relationship between the imagery and the entity, and storing the relationship in a database.
Representative Claims
1. A method for automated collection and correlation for tactical information, the method comprising: receiving, by one or more processors, information for a plurality of entities, wherein the information includes geo-spatial information; storing, by the one or more processors, the information for the plurality of entities in a database; receiving, by the one or more processors, imagery and data collected from one or more remote sensors; storing, by the one or more processors, the imagery and the data from the one or more remote sensors; determining, by the one or more processors, a field of view of the imagery based on geo-spatial information regarding the one or more remote sensors sensing the imagery; retrieving, by the one or more processors, the geo-spatial information of the plurality of entities; comparing, by the one or more processors, the geo-spatial information of the plurality of entities with the field of view of the imagery to determine if one or more of the plurality of entities are within the field of view; automatically creating, by the one or more processors, a relationship between the imagery at a time that the one or more of the plurality of entities are within the field of view and the one or more of the plurality of entities within the field of view; storing, by the one or more processors, the relationship between the imagery at the time that the entity is within the field of view and the one or more of the plurality of entities within the field of view in the database; calculating, by the one or more processors, an association matrix between the plurality of entities, the association matrix calculated using the information for the plurality of entities, the relationship, the imagery, and the data from the one or more remote sensors; displaying, by the one or more processors, a link diagram of the plurality of entities based on the association matrix between the plurality of entities; and displaying, by the one or more processors, video received from the one or more remote sensors with one or more of the plurality of entities overlaid on the displayed video.

2. The method of claim 1, further comprising calculating geo-spatial information of one or more of the plurality of entities.

3. The method of claim 1, further comprising: receiving geo-spatial information of the imagery from the one or more remote sensors.

4. The method of claim 1, wherein the one or more remote sensors comprise an unmanned aerial vehicle (UAV).

5. The method of claim 1, wherein said imagery comprises at least one of video clips or snapshots.

6. The method of claim 5, further comprising: capturing imagery with a remote sensor.

7. The method of claim 1, wherein the plurality of entities comprise at least one of a person, a vehicle, an event, a facility, a mission, a location, or an object.

8. The method of claim 7, wherein the plurality of entities are defined based on information from at least one of an event or signal activation.

9. The method of claim 1, wherein the data collected from the one or more remote sensors comprises: location, heading, altitude, sensor depression, damage, sensor stare point, footprint field of view, or battery level.

10. The method of claim 1, further comprising: receiving a selection of one of the plurality of entities overlaid on the displayed video, wherein the selected overlaid entity is moved within the displayed video and a location of the moved overlaid entity is updated in the database.

11. The method of claim 1, further comprising: displaying a map; receiving a selection to drag and drop one of the plurality of entities onto the displayed map; determining a location of the dropped entity; and updating the database with the location of the dropped entity.

12. The method of claim 1, further comprising: inserting one or more annotations into the displayed video, wherein the annotations are associated with one or more entities of the plurality of entities.
13. A non-transitory computer readable medium storing computer readable program code that, when executed by a computer, causes the computer to: receive information on a plurality of entities; store the information on the plurality of entities in a database; display a graphical user interface for a user to draw a polygon defining an area of interest, wherein the area of interest contains one or more of the plurality of entities; receive the defined area of interest; determine a field of view of one or more remote sensors based on geo-spatial information regarding the remote sensor; store imagery and data from the one or more remote sensors; monitor a field of view of the one or more remote sensors; determine when the field of view and the area of interest intersect using the processor; capture imagery of the remote sensor when there is an intersection; store the captured imagery and the geo-spatial information at a time there is an intersection in a database; create a relationship between the captured imagery and the one or more of the plurality of entities contained in the area of interest; store the relationship in the database; calculate an association matrix between the plurality of entities, the association matrix calculated using the information on the plurality of entities, the relationship, the imagery, and the data from the one or more remote sensors; display a link diagram of the plurality of entities based on the association matrix between the plurality of entities; and display video received from the one or more remote sensors with one or more of the plurality of entities overlaid on the displayed video.

14. The non-transitory computer readable medium of claim 13, wherein the computer readable program code further causes the computer to: calculate the field of view of the one or more remote sensors based on geo-spatial information from the one or more remote sensors.

15. The non-transitory computer readable medium of claim 13, wherein the computer readable program code further causes the computer to: determine geo-spatial information of an entity of the plurality of entities indicating the entity is within the field of view of the one or more remote sensors.

16. The non-transitory computer readable medium of claim 15, wherein the computer readable program code further causes the computer to: search a database of entities to determine the geo-spatial information of at least one entity of the plurality of entities that intersects with the field of view of the sensor.

17. The non-transitory computer readable medium of claim 15, wherein the plurality of entities comprise at least one of a person, a vehicle, an event, a facility, a mission, a location, or an object.

18. The non-transitory computer readable medium of claim 13, wherein the data stored from the one or more remote sensors comprises: location, heading, altitude, sensor depression, damage, sensor stare point, footprint field of view, or battery level.

19. The non-transitory computer readable medium of claim 13, wherein the computer readable program code further causes the computer to: receive a selection of one of the plurality of entities overlaid on the displayed video, wherein the selected overlaid entity is moved within the displayed video and a location of the moved overlaid entity is updated in the database.

20. The non-transitory computer readable medium of claim 13, wherein the computer readable program code further causes the computer to: display a map; receive a selection to drag and drop one of the plurality of entities onto the displayed map; determine a location of the dropped entity; and update the database with the location of the dropped entity.

21. The non-transitory computer readable medium of claim 13, wherein the computer readable program code further causes the computer to: insert one or more annotations into the displayed video, wherein the annotations are associated with one or more entities of the plurality of entities.
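The core of claims 1 and 13 is a two-step pipeline: test which known entities fall inside a sensor's ground-projected field of view, record imagery-entity relationships, then build an association matrix from those relationships to drive a link diagram. The sketch below is a minimal illustration of that pipeline, not the patent's implementation; all names (`Entity`, `correlate`, `association_matrix`), the polygon footprint model, and the co-occurrence weighting are assumptions introduced here for clarity.

```python
# Illustrative sketch of the claimed correlation pipeline (not the
# patented implementation): point-in-footprint tests produce
# (image, entity) relationships, which are aggregated into a
# co-occurrence association matrix.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Entity:
    name: str
    lon: float
    lat: float

def point_in_polygon(lon, lat, polygon):
    """Ray-casting test: is (lon, lat) inside the polygon?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

def correlate(entities, footprint, image_id):
    """Relationship records (image_id, entity name) for entities in view."""
    return [(image_id, e.name) for e in entities
            if point_in_polygon(e.lon, e.lat, footprint)]

def association_matrix(relationships):
    """Co-occurrence counts for entity pairs seen in the same imagery."""
    by_image = {}
    for image_id, name in relationships:
        by_image.setdefault(image_id, set()).add(name)
    matrix = {}
    for names in by_image.values():
        for a, b in combinations(sorted(names), 2):
            matrix[(a, b)] = matrix.get((a, b), 0) + 1
    return matrix

entities = [Entity("vehicle-1", 1.0, 1.0),
            Entity("person-X", 2.0, 2.0),
            Entity("facility-A", 9.0, 9.0)]
# Two ground-projected sensor footprints captured at different times.
fov_t1 = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
fov_t2 = [(0.0, 0.0), (3.0, 0.0), (3.0, 3.0), (0.0, 3.0)]
rels = correlate(entities, fov_t1, "img-001") + correlate(entities, fov_t2, "img-002")
print(rels)
print(association_matrix(rels))
```

In this toy run, vehicle-1 and person-X appear together in both footprints while facility-A is never in view, so the matrix would link only the first two; edge weights of this kind are one plausible input to the claimed link diagram.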
Patents cited by this patent (25)
LinneVonBerg, Dale C.; Kruer, Melvin; Colbert, Michael; Smith, Russell, Airborne real time image exploitation system (ARIES).
Lareau, Andre G.; Beran, Stephen R.; James, Brian; Quinn, James P.; Lund, John, Autonomous electro-optical framing camera system with constant ground resolution, unmanned airborne vehicle therefor, and methods of use.
Davis, Curtis Herbert; Klaric, Matthew Nicholas; Scott, Grant Jason; Shyu, Chi-Ren, Identifying geographic-areas based on change patterns detected from high-resolution, remotely sensed imagery.
Au, KwongWing; Curtner, Keith L.; Bedros, Saad J., Methods for defining, detecting, analyzing, indexing and retrieving events using video image processing.
Roy, Philippe; Yu, Jun; Linden, David S., Methods, apparatus and systems for enhanced synthetic vision and multi-sensor data fusion to improve operational capabilities of unmanned aerial vehicles.
Griffin, Arthur F. (Redondo Beach, CA); Moise, Norton L. (Pacific Palisades, CA), Predictive look ahead memory management for computer image generation in simulators.
Mershon, Jeffrey David; Armstrong, Jef Russell; Reininger, Daniel J.; Cohen, Andrew Jacob; Kulberda, Raymond William; Makwana, Dhananjay D.; Mago, Rajeev, Systems and methods for the management of information to enable the rapid dissemination of actionable information.
Rhoads, Geoffrey B.; Lofgren, Neil E.; Patterson, Philip R., Systems and methods using identifying data derived or extracted from video, audio or images.