IPC Classification Information

Country / Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): (not listed)
Application No.: UP-0823506 (2001-03-28)
Registration No.: US-7839432 (2011-01-22)
Inventors / Address:
- Fernandez, Dennis Sunga
- Fernandez, Irene Hu
Agent / Address: (not listed)
Citation Information: times cited: 13; cited patents: 174
Abstract
Integrated imaging and GPS network monitors remote object movement. Browser interface displays objects and detectors. Database stores object position movement. Cameras detect objects and generate image signal. Internet provides selectable connection between system controller and various cameras according to object positions.
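The abstract describes an Internet-selectable connection between a system controller and various cameras according to object positions. A minimal sketch of that selection idea is below; all names are hypothetical, and observation ranges are modeled as axis-aligned bounding boxes for illustration only, not as the patent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    # Hypothetical observation range: (x_min, y_min, x_max, y_max)
    bbox: tuple

    def covers(self, pos):
        """True if the position lies inside this camera's observation range."""
        x, y = pos
        x0, y0, x1, y1 = self.bbox
        return x0 <= x <= x1 and y0 <= y <= y1

def select_camera(cameras, position):
    """Return the first camera whose observation range contains the position."""
    for cam in cameras:
        if cam.covers(position):
            return cam
    return None

cams = [Camera("cam-A", (0, 0, 50, 50)), Camera("cam-B", (50, 0, 100, 50))]
print(select_camera(cams, (72, 10)).name)  # cam-B
```

In a deployed system the controller would perform this lookup against database-stored object positions and open a network connection to the selected camera; the geometry here is deliberately simplified.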
Representative Claims
We claim:

1. A system comprising: a movement module configured to receive visual data from a first detector, wherein the visual data is associated with an object in a first observation range, wherein the movement module is further configured to determine a movement vector of a movement of the object based at least in part on the visual data and object data received from a mobile unit physically associated with the object, wherein the movement module is further configured to determine a second observation range associated with the object; and a processor configured to select the first detector based at least in part on the first observation range, wherein the processor is further configured to select a second detector based at least in part on the movement vector and the second observation range; wherein the first detector is configured to extrapolate a predicted future location of the object and hand-off observation of the object to the second detector in response to the processor selecting the second detector.

2. The system of claim 1, wherein a distance between the second detector and the first detector is greater than a distance between the first detector and a third detector.

3. The system of claim 1, wherein the second detector is activated in response to an instruction from at least one of the processor or the movement module, and wherein the first detector is configured to hand-off the observation of the object to the second detector in response to the predicted future location of the object and an instruction from at least one of the processor or the movement module.

4. The system of claim 1, wherein the mobile unit generates a position signal if the object moves within at least one of the first observation range or the second observation range.

5. The system of claim 1, wherein the mobile unit comprises an accelerometer.

6. The system of claim 1, wherein the processor is further configured to receive from a database object information comprising at least one of an object name, an object identifier, an object group, an object query, an object condition, an object status, an object location, an object time, an object error, an object image, a video broadcast signal, a representation of an object identity, or an audio broadcast signal.

7. The system of claim 1, wherein the movement vector is determined using at least one of an extrapolated positional signal or an extrapolated visual signal.

8. The system of claim 1, wherein the object is authenticated according to at least one of a voice pattern, a magnetic signal, or a smart-card signal.

9. The system of claim 1, wherein an electronic file comprising at least one of a recorded voice transmission, a recorded music transmission, a live voice transmission or a live music transmission is provided to the object via a network.

10. A system comprising: a first detector configured to detect visual data associated with an object in a first observation range, wherein the first detector is further configured to provide the visual data to a movement module, and wherein the first detector is selectable by a processor based at least in part on the first observation range; and a second detector selectable by the processor based at least in part on a movement vector of a movement of the object and a second observation range, wherein the movement vector is determined by the movement module based at least in part on the visual data and object data received from a mobile unit physically associated with the object and configured to detect the object data, and wherein the first detector is configured to extrapolate a predicted future location of the object and hand-off observation of the object to the second detector in response to the second detector being selected by the processor.

11. The system of claim 10, wherein the first detector is configured to extrapolate the predicted future location of the object and hand-off the observation of the object to the second detector in response to the movement vector indicating that the object is about to move into the second observation range.

12. The system of claim 10, wherein the second detector is activated in response to the processor determining that the object is traveling from the first observation range to the second observation range.

13. The system of claim 10, wherein the mobile unit comprises an accelerometer.

14. The system of claim 10, wherein the processor is further configured to receive from a database object information comprising at least one of an object name, an object identifier, an object group, an object query, an object condition, an object status, an object location, an object time, an object error, an object image, a video broadcast signal, a representation of an object identity, or an audio broadcast signal.

15. The system of claim 10, wherein the object is monitored using at least one of an extrapolated positional signal or an extrapolated visual signal.

16. The system of claim 10, wherein the object is authenticated according to at least one of a voice pattern, a magnetic signal, or a smart-card signal.

17. The system of claim 10, wherein an electronic file comprising at least one of a recorded voice transmission, a recorded music transmission, a live voice transmission or a live music transmission is provided to the object via a network.

18. The system of claim 10, wherein the processor confirms an identity of the object by processing a visual image of the object using at least one of adaptive learning software or neural learning software to recognize the object in real time.

19. A method comprising: selecting a first detector based at least in part on a first observation range, wherein the first detector is configured to observe an object in the first observation range, and wherein the first detector is configured to detect visual data associated with the object; determining a movement vector of a movement of the object based at least in part on the visual data and object data received from a mobile unit physically associated with the object; and selecting a second detector based at least in part on the movement vector and a second observation range associated with the object, wherein the first detector is configured to extrapolate a predicted future location of the object and hand-off observation of the object to the second detector in response to selecting the second detector.

20. The method of claim 19, further comprising activating the second detector in response to the movement vector.

21. The method of claim 19, further comprising activating the second detector in response to a processor determining that the object will be traveling from the first observation range to the second observation range.

22. The method of claim 19, further comprising receiving from a database a representation of an identity and a location of the object, and receiving, from the database, object information comprising at least one of an object name, an object identifier, an object group, an object query, an object condition, an object status, an object location, an object time, an object error, an object image signal, a video broadcast signal, or an audio broadcast signal.

23. The method of claim 19, further comprising monitoring the object using at least one of a predicted object location, an expected object location, an extrapolated positional signal, an extrapolated visual signal, a last-stored positional signal or a last-stored visual signal.

24. The method of claim 19, further comprising authenticating the object according to at least one of a voice pattern, a magnetic signal or a smart-card signal.

25. The method of claim 19, further comprising providing an electronic file having at least one of a recorded voice transmission, a recorded music transmission, a live voice transmission or a live music transmission to the object via a network.

26. The method of claim 19, further comprising confirming the identity of the object by processing a visual image of the object using at least one of adaptive learning software or neural learning software to recognize the object in real time.

27. The system of claim 1, wherein the object data is object location data.

28. The system of claim 3, wherein the processor is configured to select the second detector in response to at least one of the object being in the second observation range, an expectation that the object will be in the second observation range, a predicted trajectory of the object, an actual trajectory of the object being directed toward the second observation range, or the object being about to enter the second observation range.

29. The system of claim 10, wherein the mobile unit, the first detector, and the second detector are configured to communicate wirelessly with the processor through a network.

30. A method comprising: detecting, at a first detector, visual data associated with an object in a first observation range, wherein the first detector is selected by a processor based at least in part on the first observation range; providing the visual data to a movement module, wherein the movement module is configured to determine a movement vector of a movement of the object based at least in part on the visual data and object data received from a mobile unit physically associated with the object and configured to detect the object data; extrapolating a predicted future location of the object; and handing off observation of the object from the first detector to a second detector in response to the second detector being selected by the processor based at least in part on the movement vector and a second observation range.

31. The method of claim 30, wherein the visual data comprises a representation of a location and an identity of the object.

32. The method of claim 30, wherein the object data comprises GPS location data.
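The independent claims share one mechanism: determine a movement vector, extrapolate a predicted future location, and hand off observation to the detector whose range covers that prediction. The sketch below illustrates that mechanism under simplifying assumptions that are not from the patent (2-D positions, constant-velocity extrapolation, rectangular observation ranges, hypothetical names).

```python
def in_range(bbox, pos):
    """True if pos lies inside an axis-aligned observation range (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    return x0 <= pos[0] <= x1 and y0 <= pos[1] <= y1

def movement_vector(p_prev, p_now, dt=1.0):
    """Velocity estimate from two successive object positions (visual + GPS fused upstream)."""
    return ((p_now[0] - p_prev[0]) / dt, (p_now[1] - p_prev[1]) / dt)

def extrapolate(pos, vel, horizon=1.0):
    """Predicted future location assuming constant velocity over the horizon."""
    return (pos[0] + vel[0] * horizon, pos[1] + vel[1] * horizon)

def select_handoff(ranges, p_prev, p_now):
    """Pick the detector whose observation range covers the predicted future location."""
    vel = movement_vector(p_prev, p_now)
    predicted = extrapolate(p_now, vel)
    for name, bbox in ranges.items():
        if in_range(bbox, predicted):
            return name, predicted
    return None, predicted

ranges = {"detector-1": (0, 0, 50, 50), "detector-2": (50, 0, 100, 50)}
# Object moving right at 10 units/step, currently at (45, 25):
print(select_handoff(ranges, (35, 25), (45, 25)))  # ('detector-2', (55.0, 25.0))
```

Here the hand-off target is chosen before the object leaves the first range, matching the "about to move into the second observation range" condition of claim 11; a real system would also need to activate the selected detector and transfer tracking state.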