A robotic platform is capable of multi-tasking under the control of a single remote operator. The platform may include a turret mounted on a positioning mechanism. The turret may incorporate an imaging sensor, a target designator and a weapon in a synchronized manner. The robotic platform is capable of switching between different modes: (i) an engaged mode, in which the turret is aligned with the preferred direction of travel of the platform; in the engaged mode the turret is aimed by maneuvering the entire robotic platform; and (ii) a disengaged mode, in which the robotic platform faces in a first direction while the turret faces another direction to acquire targets. Various functions of driving the platform and operating the turret may be automated to facilitate control by a single operator.
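The two-mode behavior described above can be modeled as a simple state machine: in the engaged mode the turret heading follows the hull, while in the disengaged mode it slews independently. The following is a minimal sketch under that reading; the `Turret` class and its method names are hypothetical, not from the patent.

```python
from enum import Enum, auto

class TurretMode(Enum):
    """Operating modes described in the abstract."""
    ENGAGED = auto()     # turret locked to the platform's direction of travel
    DISENGAGED = auto()  # turret slews independently to acquire targets

class Turret:
    """Minimal sketch of the mode-switching turret (hypothetical API)."""

    def __init__(self) -> None:
        self.mode = TurretMode.ENGAGED
        self.heading_deg = 0.0  # turret heading, degrees

    def align_to_platform(self, platform_heading_deg: float) -> None:
        # In engaged mode the turret is aimed by steering the whole platform.
        if self.mode is TurretMode.ENGAGED:
            self.heading_deg = platform_heading_deg % 360.0

    def slew_to(self, target_bearing_deg: float) -> None:
        # In disengaged mode the turret rotates independently of the hull.
        if self.mode is TurretMode.DISENGAGED:
            self.heading_deg = target_bearing_deg % 360.0

    def switch_mode(self, mode: TurretMode) -> None:
        self.mode = mode

# The platform drives north-east while the turret covers a flank.
t = Turret()
t.align_to_platform(45.0)            # engaged: follows the hull
t.switch_mode(TurretMode.DISENGAGED)
t.slew_to(270.0)                     # disengaged: faces another direction
print(t.mode.name, t.heading_deg)    # DISENGAGED 270.0
```

Note that `align_to_platform` is a no-op while disengaged, mirroring the abstract's point that the two headings are decoupled in that mode.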
Representative Claims
1. A robotic platform having a main frame and comprising:
a) a first imaging sensor directed in a preferred direction of travel;
b) synchronized factors rotatably mounted to the main frame, wherein said synchronized factors include a second imaging sensor, a target designator and a weapon; and
c) a remote control interface configured for control by a single human operator;
wherein said synchronized factors are configured for switching between two modes:
i) an engaged mode wherein said synchronized factors are aligned to said first imaging sensor, and
ii) a disengaged mode wherein said synchronized factors rotate independently of said first imaging sensor.

2. The robotic platform of claim 1, further comprising d) a processor configured for performing a task automatically when said synchronized factors are in said disengaged mode.

3. The robotic platform of claim 2, wherein said task includes at least one action selected from the group consisting of detecting a motion, locking on a target, tracking a target, approaching a target, warning said single human operator of a need for attention, driving the robotic platform, overcoming an obstacle, following a target, retreating from a target, evading a threat and acting in support of a friendly combat unit.

4. The robotic platform of claim 1, wherein said first imaging sensor is synchronized to a second target designator and to a second weapon.

5. The robotic platform of claim 1, wherein said first imaging sensor is configured for reconnaissance in front of the robotic platform while the robotic platform is in said disengaged mode.

6. The robotic platform of claim 1, wherein said synchronized factors are configured to function in said disengaged mode for supplying information on events occurring around a vehicle while the robotic platform is being transported by said vehicle.

7. The robotic platform of claim 1, wherein said remote control interface is configured to utilize an intuitive power of said single human operator.

8. The robotic platform of claim 7, wherein said intuitive power includes at least one ability selected from the group consisting of binocular depth perception, peripheral motion detection, and stereo audio perception.

9. The robotic platform of claim 1, wherein said synchronized factors are mounted to an interchangeable modular assembly.

10. The robotic platform of claim 1, wherein said remote control interface is configured to present to said single human operator an integrated image including an image captured by said first imaging sensor and another image captured by said second imaging sensor.

11. The robotic platform of claim 1, wherein said switching is performed automatically.

12. The robotic platform of claim 11, wherein said switching is performed in reaction to at least one event selected from the group consisting of detecting a movement in an environment around the robotic platform, detecting an attack and detecting a sound.

13. The robotic platform of claim 1, wherein said switching includes at least one action selected from the group consisting of directing said synchronized factors toward a target, designating a target, and activating said weapon towards a target.

14. The robotic platform of claim 1, further comprising: d) a turret, and wherein said synchronized factors are mounted to said turret.

15. A method for a single human operator to control a robotic platform having a main frame, comprising:
a) acquiring a first image from a first imaging sensor directed in a preferred direction of travel;
b) providing a remote control interface configured for control by a single human operator;
c) switching said synchronized factors from an engaged mode wherein said synchronized factors are aligned to said first imaging sensor to a disengaged mode wherein said synchronized factors rotate independently of said first imaging sensor; and
d) directing synchronized factors towards a target, wherein said synchronized factors include a rotationally mounted second imaging sensor, a target designator and a weapon.

16. The method of claim 15, further comprising: e) performing a task automatically when said synchronized factors are in said disengaged mode.

17. The method of claim 16, wherein said task includes at least one action selected from the group consisting of detecting a motion, tracking a target, locking on a target, warning the single human operator of a need for attention, driving the robotic platform, overcoming an obstacle, following a target, approaching a target, retreating from a target, avoiding a threat and acting in support of a friendly combat unit.

18. The method of claim 15, further comprising e) synchronizing said first imaging sensor to a second target designator and to a second weapon.

19. The method of claim 15, further comprising e) supplying information on events occurring in an environment around a vehicle while the robotic platform is being transported by said vehicle, using said second imaging sensor in said disengaged mode.

20. The method of claim 15, further comprising: e) utilizing an intuitive power of said single human operator.

21. The method of claim 20, wherein said intuitive power includes at least one ability selected from the group consisting of binocular depth perception, peripheral motion detection and stereo sound perception.

22. The method of claim 15, further comprising: e) changing a modular assembly including said synchronized factors.

23. The method of claim 15, further comprising: e) presenting to said single human operator an integrated image including an image captured by said first imaging sensor and another image captured by said second imaging sensor.

24. The method of claim 15, wherein said switching is performed automatically.

25. The method of claim 15, wherein said switching is performed in reaction to at least one event selected from the group consisting of detecting a movement in an environment around the robotic platform, detecting an attack and detecting a sound.
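The control method of claim 15 enumerates four steps: a) acquire a forward image, b) present the remote interface, c) switch to the disengaged mode, d) direct the synchronized factors toward a target. Those steps can be sketched as one control cycle; every class and function name below is hypothetical, introduced only to make the sequence concrete.

```python
class ForwardSensor:
    """Stand-in for the first imaging sensor (step a)."""
    def acquire(self) -> str:
        return "frame-0001"  # placeholder for a captured image

class OperatorInterface:
    """Stand-in for the single-operator remote interface (step b)."""
    def __init__(self) -> None:
        self.last_frame = None
    def display(self, frame) -> None:
        self.last_frame = frame

class SyncedFactors:
    """Second imaging sensor, designator and weapon moving as one unit."""
    def __init__(self) -> None:
        self.mode = "engaged"
        self.heading = 0.0
    def switch_mode(self, mode: str) -> None:
        self.mode = mode
    def slew_to(self, bearing: float) -> None:
        if self.mode == "disengaged":
            self.heading = bearing % 360.0

def control_cycle(sensor, ui, factors, target_bearing: float) -> float:
    """One pass through the steps of claim 15 (hypothetical names)."""
    frame = sensor.acquire()           # a) acquire forward-looking image
    ui.display(frame)                  # b) present to the single operator
    factors.switch_mode("disengaged")  # c) decouple from the hull heading
    factors.slew_to(target_bearing)    # d) direct toward the target
    return factors.heading

heading = control_cycle(ForwardSensor(), OperatorInterface(), SyncedFactors(), 135.0)
print(heading)  # 135.0
```

In this sketch the slew in step d) only takes effect after step c), mirroring the claim's ordering: the synchronized factors must be disengaged before they can be directed independently of the platform's direction of travel.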
Hoffman, Orin P. F.; Keefe, Peter; Smith, Eric; Wang, John; Labrecque, Andrew; Ponsler, Brett; Macchia, Susan; Madge, Brian J., Remotely operating a mobile robot.
Gordon, Mark D.; Davies, Jeffrey P.; White, Christopher M.; Vermeer, William H.; Kelly, Timothy J.; Vega, Michael J.; Whitlock, Matthew A., Robotic tool interchange system.