Systems, methods, and devices are provided for collecting positional information for, and controlling, a movable object. In one aspect of the present disclosure, a method for collecting positional information for a movable object includes: receiving data from a first sensing system coupled to the movable object; receiving data from a second sensing system coupled to the movable object; determining a weight value for the data from the second sensing system based on a strength of the signal received by the sensor of the second sensing system; and calculating the positional information of the movable object based on (i) the data from the first sensing system and (ii) the data from the second sensing system, factoring in the weight value.
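As a rough illustration of the weighting scheme described above (a sketch, not the patent's implementation), the following derives a weight for GPS data from the number and magnitude of received signals, then blends a GPS position estimate with an inertial one. All function names, the normalization by a maximum satellite count, and the simple linear blend are hypothetical assumptions for illustration.

```python
def gps_weight(num_signals, magnitude, max_signals=12):
    """Hypothetical weighting: trust GPS more when more satellite
    signals are received and when their magnitude (normalized 0-1)
    is higher. max_signals is an assumed normalization constant."""
    coverage = min(num_signals / max_signals, 1.0)
    return coverage * magnitude

def fuse_position(imu_estimate, gps_estimate, weight):
    """Blend two per-axis position estimates; a weight near 1 favors
    GPS, a weight near 0 falls back to the inertial estimate."""
    return [(1.0 - weight) * i + weight * g
            for i, g in zip(imu_estimate, gps_estimate)]

w = gps_weight(num_signals=8, magnitude=0.9)       # ≈ 0.6
pos = fuse_position([10.0, 20.0, 5.0], [10.4, 19.8, 5.2], w)
```

With strong GPS reception the fused position tracks the GPS estimate closely; as the signal degrades, the same code smoothly reverts to inertial dead reckoning.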
Representative Claims
1. A method for determining positional information of a movable object, said method comprising:
receiving data from a first sensing system coupled to the movable object, said first sensing system comprising a first sensor configured to obtain a first signal useful for determining the positional information of the movable object;
receiving data from a second sensing system coupled to the movable object, said second sensing system comprising a second sensor configured to obtain a second signal useful for determining the positional information of the movable object;
receiving data outputted by a third sensing system coupled to the movable object, said third sensing system comprising a third sensor configured to obtain a third signal useful for determining the positional information of the movable object;
determining, with aid of a processor, a second weight value for the data from the second sensing system based on a strength of the second signal received by the second sensor;
generating, with aid of the processor, an estimation of a positional change of the movable object based on (i) the data from the first sensing system and (ii) the data from the second sensing system factoring in the second weight value;
processing the data outputted by the third sensing system by selecting a portion of the data outputted by the third sensing system according to the estimation of the positional change as processed data; and
calculating, with aid of the processor, the positional information of the movable object based on (a) the estimation of the positional change of the movable object, and (b) the processed data.

2. The method of claim 1, wherein the movable object is an unmanned aerial vehicle.

3. The method of claim 1, wherein calculating the positional information of the movable object comprises calculating location information and orientation information of the movable object.

4. The method of claim 1, further comprising: determining, with aid of the processor, a third weight value for the data outputted by the third sensing system based on a vision-based accuracy parameter that is associated with an environment within which the third sensing system operates.

5. The method of claim 4, further comprising: calculating, with aid of the processor, the positional information of the movable object based on the data outputted by the third sensing system factoring in the third weight value.

6. The method of claim 4, wherein the third sensing system comprises one or more vision sensors configured to obtain the third signal, and wherein the vision-based accuracy parameter is indicative of an accuracy of a machine vision algorithm that is used to analyze the third signal.

7. The method of claim 6, wherein the third weight value is lower when the machine vision algorithm is less accurate for the environment in which the movable object operates.

8. The method of claim 7, wherein the environment is at high altitude and/or above water.

9. The method of claim 1, wherein the second sensor comprises a global positioning system (GPS) sensor configured to obtain the second signal, and wherein the strength of the second signal received by the GPS sensor of the second sensing system depends on a number of GPS signals received and/or a magnitude of the GPS signals received.

10. The method of claim 9, wherein the second weight value for the data from the second sensing system has (1) a higher weight value when the strength of the second signal is high, and (2) a lower weight value when the strength of the second signal is low.

11. The method of claim 1, wherein the third sensing system comprises one or more vision sensors configured to obtain the third signal.

12. The method of claim 1, wherein said first sensing system comprises an inertial measurement unit (IMU).

13. The method of claim 1, further comprising: receiving an instruction for the movable object to return to a preset location.

14. The method of claim 13, further comprising: calculating, with aid of the processor, a route for the movable object to travel to return to the preset location, based on the calculated positional information of the movable object.

15. The method of claim 1, further comprising: storing, with aid of a path storage module, the positional information of the movable object at a plurality of different time points; and defining, with aid of the processor, a flight path for the movable object using the positional information of the movable object at the plurality of different time points.

16. The method of claim 1, further comprising: determining, with aid of the processor, a velocity rule for the movable object based on the positional information, wherein the velocity rule defines a maximum or minimum velocity for the movable object.

17. The method of claim 1, further comprising: feeding the positional information back to at least one of the first sensing system, the second sensing system, or the third sensing system.

18. A system for determining positional information of a movable object, said system comprising:
a first sensing system coupled to the movable object, said first sensing system comprising a first sensor configured to obtain a first signal useful for determining the positional information of the movable object;
a second sensing system coupled to the movable object, said second sensing system comprising a second sensor configured to obtain a second signal useful for determining the positional information of the movable object;
a third sensing system coupled to the movable object, said third sensing system comprising a third sensor configured to obtain a third signal useful for determining the positional information of the movable object; and
a processor configured to:
determine a second weight value for data received from the second sensing system based on a strength of the second signal;
generate an estimation of a positional change of the movable object based on (i) data from the first sensing system and (ii) the data from the second sensing system factoring in the second weight value;
process data outputted by the third sensing system by selecting a portion of the data outputted by the third sensing system according to the estimation of the positional change as processed data; and
calculate the positional information of the movable object based on (i) the estimation of the positional change of the movable object, and (ii) the processed data.

19. The system of claim 18, wherein the processor is further configured to determine a third weight value for the data outputted by the third sensing system based on a vision-based accuracy parameter that is associated with an environment within which the third sensing system operates.

20. The system of claim 19, wherein the processor is further configured to calculate the positional information of the movable object based on the data outputted by the third sensing system factoring in the third weight value.
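The three-sensor flow recited in the method and system claims — estimate a positional change from the first (inertial) and weighted second (GPS) sensing systems, then select a portion of the third (vision) system's output according to that estimate, then combine the two — can be sketched as follows. The function names, the use of scalar one-dimensional displacements, the tolerance value, and the equal-weight averaging are illustrative assumptions, not taken from the patent.

```python
def select_vision_portion(vision_changes, estimated_change, tolerance=1.0):
    """One hypothetical reading of 'selecting a portion of the data
    outputted by the third sensing system according to the estimation
    of the positional change': keep only vision-derived displacement
    candidates that fall within a tolerance of the IMU/GPS estimate."""
    return [c for c in vision_changes
            if abs(c - estimated_change) <= tolerance]

def calculate_position(prev_position, estimated_change, processed):
    """Combine the dead-reckoned change estimate with the mean of the
    vision candidates that survived the selection step; fall back to
    the estimate alone if no candidate survived."""
    if processed:
        refined = sum(processed) / len(processed)
        change = 0.5 * (estimated_change + refined)  # assumed equal weights
    else:
        change = estimated_change
    return prev_position + change

# A spurious vision candidate (9.7) is rejected by the selection step.
candidates = select_vision_portion([2.1, 2.4, 9.7], estimated_change=2.2)
pos = calculate_position(100.0, 2.2, candidates)
```

The point of the selection step is that the IMU/GPS estimate bounds the search: outlier vision measurements far from the expected change are discarded before they can corrupt the final position.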
Patents Cited by This Patent (64)
Cline, Duane M.; Milkie, Thomas T., Acoustic airspace collision detection system.
Fregene, Kingsley O. C.; Elgersma, Michael R.; Dajani-Brown, Samar; Pratt, Stephen G., Automatic planning and regulation of the speed of autonomous vehicles.
van Tooren, Joost; Heni, Martin; Knoll, Alexander; Beck, Johannes, Collision and conflict avoidance system for autonomous unmanned air vehicles (UAVs).
Newcombe, Richard; Izadi, Shahram; Molyneaux, David; Hilliges, Otmar; Kim, David; Shotton, Jamie Daniel Joseph; Kohli, Pushmeet; Fitzgibbon, Andrew; Hodges, Stephen Edward; Butler, David Alexander, Mobile camera localization using depth maps.
Duggan, David S.; Felio, David A.; Pate, Billy B.; Longhi, Vince R.; Petersen, Jerry L.; Bergee, Mark J., Multi-sensor autonomous control of unmanned aerial vehicles.
Beard, Randal W.; Johnson, Walter H.; Christiansen, Reed; Hintze, Joshua M.; McLain, Timothy W., Programmable autopilot system for autonomous flight of unmanned aerial vehicles.
Moitra, Abha; Mattheyses, Robert M.; Szczerba, Robert J.; Hoebel, Louis J.; Didomizio, Virginia A.; Yamrom, Boris, Real-time route and sensor planning system with variable mission objectives.
Sislak, David; Pechoucek, Michal; Volf, Premysl; Marik, Vladimir; Losiewicz, Paul, System and method for planning/replanning collision free flight plans in real or accelerated time.