IPC Classification Information

Country/Type: United States (US) Patent, Granted
International Patent Classification (IPC, 7th ed.): —
Application No.: US-0807325 (filed 2010-09-02)
Registration No.: US-8381982 (granted 2013-02-26)
Inventors:
- Kunzig, Robert S.
- Taylor, Robert M.
- Emanuel, David C.
- Maxwell, Leonard J.
Applicant: —
Citation information: cited by 9 patents; cites 41 patents
Abstract

A method and apparatus for managing manned and automated utility vehicles, and for picking up and delivering objects by automated vehicles. A machine vision image acquisition apparatus determines the position and the rotational orientation of vehicles in a predefined coordinate space by acquiring an image of one or more position markers and processing the acquired image to calculate the vehicle's position and rotational orientation based on processed image data. The position of the vehicle is determined in two dimensions. Rotational orientation (heading) is determined in the plane of motion. An improved method of determining position and rotational orientation is presented. Based upon the determined position and rotational orientation of the vehicles stored in a map of the coordinate space, a vehicle controller, implemented as part of a computer, controls the automated vehicles through motion and steering commands, and communicates with the manned vehicle operators by transmitting control messages to each operator.
Representative Claims

1. A method of determining a coordinate position and rotational orientation of an object within a predefined coordinate space, the method comprising:
a) providing a plurality of unique position markers having identifying indicia and positional reference indicia thereupon, the markers being arranged at predetermined known positional locations within the coordinate space, the known positional locations and known angular orientations being stored in a map, so that at least two position markers are always within view of the object;
b) using an image acquisition system mounted on the object, acquiring an image of the at least two position markers M1, M2 within view;
c) establishing a center point N1, N2 of each respective marker M1, M2 and determining a line segment N1-N2 connecting the respective center points N1, N2;
d) determining a center point O of the field of view of the image acquisition system;
e) determining line segments O-N1, O-N2 respectively connecting point O with the centers N1, N2 of the respective position markers M1, M2;
f) using the lengths and directions of line segments O-N1, O-N2 to calculate the position of the object relative to the known positions of the markers M1, M2, thereby determining the location of the object within the coordinate space; and
g) using the direction of line segment N1-N2 to calculate the rotational orientation of the object within the coordinate space.

2. The method of claim 1, wherein the predetermined known positional locations within the coordinate space of the respective position markers M1, M2 are stored in a Position Marker Look-Up Table in a memory and the center points N1, N2 of each respective marker M1, M2 used to determine the direction of line segment N1-N2 are ascertained from the Position Marker Look-Up Table.

3. The method of claim 1, wherein the positional locations within the coordinate space of the center points N1, N2 of each respective marker M1, M2 used to determine the direction of line segment N1-N2 are directly encoded on the respective position markers.

4. A method of determining a coordinate position and rotational orientation of an object within a predefined coordinate space, the method comprising:
a) providing a plurality of unique position markers having identifying indicia and positional reference indicia thereupon, the markers being arranged at predetermined known positional locations within the coordinate space, the known positional locations and known angular orientations being stored in a map, the markers being spaced so that a plurality of position markers M1, M2, …, Mx are always within view of the object;
b) using an image acquisition system mounted on the object, acquiring an image of the plurality of position markers M1, M2, …, Mx within a field of view of the image acquisition system;
c) establishing a center point N1, N2, …, Nx of each respective marker M1, M2, …, Mx and determining line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx connecting the respective pairs of center points;
d) determining a center point O of the field of view of the image acquisition system;
e) determining line segments O-N1, O-N2, …, O-Nx respectively connecting point O with the centers N1, N2, …, Nx of the respective position markers M1, M2, …, Mx;
f) using the lengths and directions of line segments O-N1, O-N2, …, O-Nx to calculate the position of the object relative to each of the known positions of the markers M1, M2, …, Mx;
g) calculating a mean value of the position of the object within the coordinate space, thereby determining the location of the object within the coordinate space;
h) using the directions of line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx within the field of view to calculate the rotational orientation of the object relative to the respective pairs M1, M2; M1, M3; …; M1, Mx; …; M(x−1), Mx of position markers M1, M2, …, Mx; and
i) calculating a mean value of the rotational orientation of the object relative to the respective pairs of position markers M1, M2, …, Mx, thereby determining the rotational orientation of the object within the coordinate space.

5. The method of claim 4, wherein the mean value of the position of the object within the coordinate space calculated in step g) is a weighted mean value.

6. The method of claim 4, wherein each calculated position of the object relative to the known positions of the markers M1, M2, …, Mx of step g) is weighted according to the proximity of each position marker to the center of the field of view, the calculated positions relative to markers nearest to the center of the field of view being accorded the highest weight, thereby minimizing any effect of optical distortion in the imaging system.

7. The method of claim 4, wherein the mean value of the rotational orientation of the object relative to the respective pairs of position markers calculated in step i) is a weighted mean value.

8. The method of claim 4, wherein each calculated rotational orientation of the object relative to the directions of line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx between respective pairs of position markers M1, M2, …, Mx of step i) is weighted according to the proximity of each line segment to the center of the field of view, the calculated positions relative to the line segment nearest to the center of the field of view being accorded the highest weight, thereby minimizing any effect of optical distortion in the imaging system.
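Steps b) through g) of claims 1 and 4 reduce to a small planar computation. The sketch below is an illustrative reading, not the patented implementation: it assumes an upward-facing, distortion-free camera whose image axes align with the vehicle body and a known metres-per-pixel scale, and the function name `locate` and its argument layout are invented for the example.

```python
import math

def locate(o, n1, n2, m1_world, m2_world, scale):
    """Estimate object position and heading from two markers.

    o: image center O (px); n1, n2: marker centers N1, N2 (px);
    m1_world, m2_world: known marker positions (m); scale: m per px.
    Assumes an upward-facing, distortion-free camera (illustrative).
    """
    # Step g): heading is the angle of segment N1-N2 in the image
    # compared with the known angle of M1-M2 in the coordinate space.
    img_angle = math.atan2(n2[1] - n1[1], n2[0] - n1[0])
    world_angle = math.atan2(m2_world[1] - m1_world[1],
                             m2_world[0] - m1_world[0])
    heading = (world_angle - img_angle) % (2 * math.pi)

    # Step f): rotate each image vector O-Ni into world axes, scale
    # to metres, and subtract it from Mi's known position; averaging
    # the per-marker estimates follows step g) of claim 4.
    c, s = math.cos(heading), math.sin(heading)
    estimates = []
    for n, m in ((n1, m1_world), (n2, m2_world)):
        vx, vy = (n[0] - o[0]) * scale, (n[1] - o[1]) * scale
        wx, wy = c * vx - s * vy, s * vx + c * vy
        estimates.append((m[0] - wx, m[1] - wy))
    x = sum(e[0] for e in estimates) / 2
    y = sum(e[1] for e in estimates) / 2
    return (x, y), heading
```

With more than two markers in view, the same per-marker estimate is simply repeated for each Mi and averaged, which is what claim 4 adds over claim 1.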
9. An apparatus useful for determining a coordinate position and rotational orientation of an object within a predefined coordinate space, the apparatus comprising:
a) a plurality of unique position markers, each comprising a machine-readable code, arranged in predetermined positional locations within the coordinate space such that at least two position markers are always within view of the object;
b) an image acquisition system, comprised of a machine vision system, the machine vision system comprising a camera, an optional light source, and image capture electronics, mounted on the object, for acquiring an image of the position markers within view;
c) an image processing system for processing pixels in the acquired image to determine the identity of each position marker, the position of each position marker relative to the object, and the rotational orientation of each position marker relative to the object;
d) a computer unit for calculating the position of the object using the positions of at least two position markers and the rotational orientation of the object in the coordinate space using the positions of at least one pair of position markers; and
e) a system controller for receiving the object position and the rotational orientation of the object from the computer unit, and for transmitting the object position and the rotational orientation to a host system, the controller having a memory for storing:
i) a predetermined map of the coordinate space, and
ii) the object position and rotational orientation.

10. The apparatus of claim 9, further comprising:
f) a wireless data communication network for transmitting data between the computer unit and the system controller; and
g) the system controller having an input device for receiving a weighting criteria, wherein the system controller transmits the weighting criteria to the computer unit, wherein the computer unit calculates a weighted mean value of the position of the object using the positions of at least two position markers and a weighted mean value of the rotational orientation of the object in the coordinate space using the positions of at least two pairs of position markers.

11. A method for optically navigating an automated vehicle within a predefined coordinate space, the method comprising:
a) creating a map of the coordinate space, the map determining allowable travel routes and locations of obstacles within the coordinate space, and storing the map within a memory in a vehicle controller;
b) establishing a destination for each automated vehicle within the coordinate space and storing the identity and destination of each vehicle within the memory in the vehicle controller;
c) determining the coordinate position and the rotational orientation of each vehicle within the predefined coordinate space by:
i) providing a plurality of unique position markers having identifying indicia, positional reference and angular reference indicia thereupon, the markers being arranged at predetermined known positional locations and known angular orientations within the coordinate space, the known positional locations and known angular orientations being stored in the map, so that at least one position marker is within view of the vehicle;
ii) using an image acquisition system mounted on each vehicle:
1) acquiring an image of the at least one position marker within view;
2) processing the image to determine the identity, the position relative to the vehicle, and the rotational orientation relative to the vehicle of each position marker within view; and
3) calculating the position of the vehicle and the rotational orientation of the vehicle in the coordinate space and storing the position and rotational orientation information in a memory in the image acquisition system;
d) transmitting the vehicle identity and the stored coordinate position and rotational orientation of that vehicle from the image acquisition system on each vehicle to the vehicle controller and storing the vehicle identity and coordinate position and rotational orientation in the memory within the vehicle controller;
e) using the coordinate position and rotational orientation, the predetermined map of the coordinate space, and the destination stored within the memory in the vehicle controller, determining a desired path for each automated vehicle;
f) transmitting motion and steering instructions to each automated vehicle; and
g) repeating steps c) through f) until each automated vehicle reaches the established destination for that vehicle.

12. The method of claim 11, wherein, in step e), one or more alternate paths are determined for each automated vehicle.
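The closed loop of steps c) through g) in claim 11 can be sketched as follows. The `Vehicle` class, the fixed step size, and the straight-line planner are hypothetical stand-ins: the claim specifies the loop structure, not any particular localization or planning method.

```python
import math

class Vehicle:
    """Toy stand-in for an automated vehicle (hypothetical)."""
    def __init__(self, ident, pos):
        self.ident, self.pos = ident, pos

    def localize(self):
        # Stands in for step c): marker-based pose determination.
        return self.pos

    def execute(self, step):
        # Stands in for acting on motion and steering instructions.
        self.pos = (self.pos[0] + step[0], self.pos[1] + step[1])

def navigate(vehicle, dest, speed=0.5, tol=1e-6):
    poses = {}                       # controller memory (step d)
    while True:
        pos = vehicle.localize()     # step c): determine pose
        poses[vehicle.ident] = pos   # step d): report and store
        dx, dy = dest[0] - pos[0], dest[1] - pos[1]
        dist = math.hypot(dx, dy)
        if dist <= tol:              # step g): destination reached
            return poses
        # steps e)-f): straight-line path, one motion step per loop
        f = min(1.0, speed / dist)
        vehicle.execute((dx * f, dy * f))

v = Vehicle("AGV-1", (0.0, 0.0))
navigate(v, (2.0, 1.0))
```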
13. The method of claim 11, wherein the plurality of unique position markers of step c) i) are spaced so that at least two position markers are always within view of the image acquisition system on each vehicle, wherein the coordinate position and rotational orientation of each vehicle is determined by:
a) acquiring an image of the at least two position markers M1, M2 within view;
b) establishing a center point N1, N2 of each respective marker M1, M2 and determining a line segment N1-N2 connecting the respective center points N1, N2;
c) determining a center point O of the field of view of the image acquisition system;
d) determining line segments O-N1, O-N2 respectively connecting point O with the centers N1, N2 of the respective position markers M1, M2;
e) using the lengths and directions of line segments O-N1, O-N2 to calculate the position of the vehicle relative to the known positions of the markers M1, M2, thereby determining the location of the vehicle within the coordinate space; and
f) using the direction of line segment N1-N2 to calculate the rotational orientation of the vehicle within the coordinate space.

14. The method of claim 13, wherein the predetermined known positional locations within the coordinate space of the respective position markers M1, M2 are stored in a Position Marker Look-Up Table in the memory in the vehicle controller and the center points N1, N2 of each respective marker M1, M2 used to determine the direction of line segment N1-N2 are ascertained from the Position Marker Look-Up Table.

15. The method of claim 13, wherein the positional locations within the coordinate space of the center points N1, N2 of each respective marker M1, M2 are directly encoded on the respective position markers and are used to determine the direction of line segment N1-N2.

16. The method of claim 11, wherein the plurality of unique position markers of step c) i) are spaced so that a plurality of position markers M1, M2, …, Mx are always within view of the image acquisition system on each vehicle, wherein the coordinate position and rotational orientation of each vehicle is determined by:
a) acquiring an image of the plurality of position markers M1, M2, …, Mx within view;
b) establishing a center point N1, N2, …, Nx of each respective marker M1, M2, …, Mx and determining line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx connecting the respective center points;
c) determining a center point O of the field of view of the image acquisition system;
d) determining line segments O-N1, O-N2, …, O-Nx respectively connecting point O with the centers N1, N2, …, Nx of the respective position markers M1, M2, …, Mx;
e) using the lengths and directions of line segments O-N1, O-N2, …, O-Nx to calculate the position of the vehicle relative to each of the known positions of the markers M1, M2, …, Mx;
f) calculating a mean value of the position of the vehicle within the coordinate space, thereby determining the location of the vehicle within the coordinate space; and
g) using the directions of line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx within the field of view to calculate the rotational orientation of the vehicle relative to the respective pairs of position markers M1, M2, …, Mx; and
h) calculating a mean value of the rotational orientation of the vehicle relative to the respective pairs of position markers M1, M2, …, Mx, thereby determining the rotational orientation of the vehicle within the coordinate space.

17. The method of claim 16, wherein the mean value of the position of the vehicle within the coordinate space calculated in step f) is a weighted mean value.

18. The method of claim 17, wherein each calculated position of the vehicle relative to the known positions of the markers M1, M2, …, Mx of step f) is weighted according to the proximity of each position marker to the center of the field of view, the calculated positions relative to markers nearest to the center of the field of view being accorded the highest weight, thereby minimizing any effect of optical distortion in the imaging system.

19. The method of claim 17, wherein each calculated position of the vehicle relative to the known positions of the markers M1, M2, …, Mx of step f) is weighted according to the confidence level of the accuracy of positioning of each position marker in the field of view, the calculated positions relative to markers having the highest confidence level being accorded the highest weight.

20. The method of claim 16, wherein the mean value of the rotational orientation of the vehicle relative to the respective pairs of position markers calculated in step h) is a weighted mean value.

21. The method of claim 20, wherein each calculated rotational orientation of the vehicle relative to the directions of line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx between respective position markers M1, M2, …, Mx of step h) is weighted according to the proximity of each line segment to the center of the field of view, the calculated positions relative to the line segment nearest to the center of the field of view being accorded the highest weight, thereby minimizing any effect of optical distortion in the imaging system.

22. The method of claim 20, wherein each calculated rotational orientation of the vehicle relative to the directions of line segments N1-N2, N1-N3, …, N1-Nx, …, N(x−1)-Nx between respective position markers M1, M2, …, Mx of step h) is weighted according to the confidence level of the accuracy of positioning of each position marker in the field of view, the calculated positions relative to the line segment having the highest confidence level being accorded the highest weight.
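Claims 17 through 22 combine the per-marker estimates with a weighted mean. The claims say only that markers (or segments) nearest the center of the field of view, or with the highest confidence, get the highest weight; the reciprocal-of-distance weighting below is one illustrative choice, and the heading average uses unit vectors to avoid wrap-around at 0/2π.

```python
import math

def weighted_pose_mean(estimates, tiny=1e-9):
    """Combine per-marker pose estimates into one weighted mean.

    estimates: list of (position, heading, distance_from_center),
    where distance_from_center is the marker's pixel distance from
    the image center. Weighting by 1/(distance + tiny) gives the
    markers nearest the optical axis, where lens distortion is
    smallest, the highest weight (an illustrative weighting).
    """
    wsum = x = y = 0.0
    hx = hy = 0.0
    for (px, py), heading, d in estimates:
        w = 1.0 / (d + tiny)
        wsum += w
        x += w * px
        y += w * py
        # Average headings as unit vectors to handle wrap-around.
        hx += w * math.cos(heading)
        hy += w * math.sin(heading)
    return (x / wsum, y / wsum), math.atan2(hy, hx)
```

The same helper serves claims 19 and 22 by passing a reciprocal confidence score in place of the pixel distance.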
23. The method of claim 11, wherein a safety zone is established around each vehicle, the size and shape of the safety zone being determined dynamically based upon the size of that vehicle, its capabilities for direction of travel, speed of travel, ability to accelerate and decelerate, and ability to turn.

24. The method of claim 23, wherein the safety zone is used to calculate the path of each vehicle, so that the safety zone of that vehicle does not intersect with the locations of any obstacles in the coordinate space map.

25. An apparatus useful for optically navigating an automated vehicle within a predefined coordinate space, the apparatus comprising:
a) a plurality of unique position markers, each comprising a machine-readable code, arranged in predetermined positional locations within the coordinate space such that at least one position marker is within view of the automated vehicle;
b) an image acquisition system, comprised of a machine vision system, the machine vision system comprising a camera, an optional light source, and image capture electronics, mounted on the vehicle, for acquiring an image of the position markers within view;
c) an image processing system for processing pixels in the acquired image to determine the identity of each position marker, the position of each position marker relative to the vehicle, and the rotational orientation of each position marker relative to the vehicle;
d) a computer unit for calculating the position of the vehicle and the rotational orientation of the vehicle in the coordinate space; and
e) a vehicle controller for receiving destinations for the vehicle from an input, for receiving the vehicle position and the rotational orientation of the vehicle from the computer unit, and for transmitting motion and steering instructions to the automated vehicle, the controller having a memory for storing:
i) a predetermined map of the coordinate space,
ii) the vehicle position and rotational orientation, and
iii) the destination of the vehicle,
the vehicle controller determining a desired path for the automated vehicle and transmitting motion and steering instructions to the automated vehicle.

26. A method for picking up an object from a first, present, location and rotational orientation, transporting the object and delivering that object to a second, destination, location and rotational orientation within a predefined coordinate space by an optically navigated automated vehicle, the method comprising:
a) creating a map of the coordinate space, the map determining allowable travel routes and locations of obstacles within the coordinate space, and storing the map within a memory in a vehicle controller;
b) identifying the object to be transported, the present location and rotational orientation of that object and a destination location and rotational orientation of that object within the coordinate space and storing the identity, the present location and rotational orientation and the destination location and rotational orientation of the object within the memory in the vehicle controller;
c) designating an automated vehicle as the delivery vehicle for the transport and delivery;
d) determining a coordinate position and a rotational orientation of the delivery vehicle and all other vehicles within the predefined coordinate space by:
i) providing a plurality of unique position markers having identifying indicia, positional reference and angular reference indicia thereupon, the markers being arranged at predetermined known positional locations and known angular orientations within the coordinate space, the known positional locations and known angular orientations being stored in the map, so that at least one position marker is within view of the vehicle;
ii) using an image acquisition system mounted on the delivery vehicle:
1) acquiring an image of the at least one position marker within view;
2) processing the image to determine the identity, the position relative to the delivery vehicle, and the rotational orientation relative to the delivery vehicle of each position marker within view; and
3) calculating the position of the delivery vehicle, the rotational orientation of the delivery vehicle, the positions of all other vehicles and the rotational orientation of all other vehicles in the coordinate space and storing the position and rotational orientation information in a memory in the image acquisition system;
e) transmitting the delivery vehicle identity and the stored coordinate position and rotational orientation of that vehicle from the image acquisition system on the delivery vehicle to the vehicle controller and storing the delivery vehicle identity and coordinate position and rotational orientation in the memory within the controller, and transmitting the manned vehicle identities and the stored coordinate position and rotational orientation of the manned vehicles from the image acquisition system on each respective vehicle to the vehicle controller and storing the respective vehicle identities and coordinate positions and rotational orientations in the memory within the controller;
f) using the predetermined map of the coordinate space, the identity, position location and rotational orientation of the object and the present position location and rotational orientation of the designated delivery vehicle stored within the memory in the vehicle controller, determining a desired path for the delivery vehicle to pick up the object;
g) transmitting motion and steering instructions to the delivery vehicle;
h) repeating steps d) through g) until the delivery vehicle reaches the location of the object at the rotational orientation of the object;
i) transmitting motion, steering and fork control instructions to the delivery vehicle, causing the vehicle to pick up the object;
j) using the predetermined map of the coordinate space, the present position location and rotational orientation of the designated delivery vehicle and the destination location and rotational orientation for that object stored within the memory in the vehicle controller, determining a desired path for the delivery vehicle to deliver the object;
k) transmitting motion and steering instructions to the delivery vehicle, causing it to follow the desired path;
l) repeating steps d), j) and k) until the delivery vehicle reaches the destination location and the destination rotational orientation of the object;
m) transmitting motion, steering and fork control instructions to the delivery vehicle, causing the vehicle to deposit the object at the destination location;
n) calculating the actual position and rotational orientation of the object when it has been deposited; and
o) transmitting the actual position and rotational orientation of the delivered object to a host system.

27. The method of claim 26, further comprising, after step f):
f1) using the predetermined map of the coordinate space and the identity, position location and rotational orientation of other automated vehicles and manned vehicles, determining if the desired path is blocked;
f2) if the desired path is blocked, selecting an alternate path.
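Steps f1) and f2) of claim 27 check whether the desired path is blocked and fall back to an alternate. A minimal sketch, assuming paths are lists of waypoints and blockage is another vehicle's reported position within a fixed clearance radius of a waypoint (both the representation and the clearance value are illustrative, not specified by the claim):

```python
import math

def reroute_if_blocked(path, alternates, other_positions, clearance=1.0):
    """Return the desired path if clear, else the first clear alternate.

    path, alternates[i]: lists of (x, y) waypoints;
    other_positions: reported (x, y) positions of the other vehicles.
    """
    def blocked(p):
        # A path is "blocked" when any other vehicle sits within the
        # clearance radius of any of its waypoints (f1).
        return any(math.dist(w, v) < clearance
                   for w in p for v in other_positions)

    if not blocked(path):          # f1) desired path still clear
        return path
    for alt in alternates:         # f2) pick the first clear alternate
        if not blocked(alt):
            return alt
    return None                    # no clear path; hold position
```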
28. The method of claim 26, further comprising, after step g):
g1) determining a predicted trajectory and a safety zone for each manned vehicle and each automated vehicle by calculating the velocity and direction of travel of each vehicle from coordinate positions at successive time intervals;
g2) determining any areas of intersection of the safety zone of each manned vehicle and each automated vehicle with the safety zones of other manned and other automated vehicles to predict a potential collision;
g3) transmitting instructions to reduce speed, turn, or stop, to any automated vehicle that has a safety zone intersecting any safety zone of any other vehicle to prevent the predicted collision; and
g4) transmitting a warning to any manned vehicle that has a safety zone intersecting any safety zone of any other vehicle to alert the operator of the manned vehicle of a predicted potential collision, so that the operator can take appropriate action to avoid the predicted collision.

29. The method of claim 28, wherein the steps d) through g1) are repeated and, if no areas of intersection between the safety zones of an automated vehicle previously slowed, turned or stopped and either a manned or another automated vehicle are subsequently determined, further comprising, after step g1):
g2) transmitting instructions to the automated vehicle previously slowed, turned or stopped by repeating steps d) through g);
or, if no areas of intersection between the safety zones of a manned and another manned vehicle or an automated vehicle are subsequently determined, further comprising, after step g1):
g2) transmitting a signal canceling the warning to the manned vehicle to alert the operator that no potential collision is predicted.

30. The method of claim 26, further comprising, after step k):
k1) determining a predicted trajectory and a safety zone for each manned vehicle and each automated vehicle by calculating the velocity and direction of travel of each vehicle from coordinate positions at successive time intervals;
k2) determining any areas of intersection of the safety zone of each manned vehicle and each automated vehicle with the safety zones of other manned and other automated vehicles to predict a potential collision;
k3) transmitting instructions to reduce speed, turn, or stop, to any automated vehicle that has a safety zone intersecting any safety zone of any other vehicle to prevent the predicted collision; and
k4) transmitting a warning to any manned vehicle that has a safety zone intersecting any safety zone of any other vehicle to alert the operator of the manned vehicle of a predicted potential collision, so that the operator can take appropriate action to avoid the predicted collision.

31. The method of claim 30, wherein the steps d), j), k) and k1) are repeated and, if no areas of intersection between the safety zones of an automated vehicle previously slowed, turned or stopped and the safety zones of either a manned or another automated vehicle are subsequently determined, further comprising, after step k1):
k2) transmitting instructions to the automated vehicle previously slowed, turned or stopped, by repeating steps d), j) and k);
or, if no areas of intersection between safety zones of a manned and another manned vehicle or an automated vehicle are subsequently determined, after step k1):
k2) transmitting a signal canceling the warning to the manned vehicle to alert the operator that no potential collision is predicted.
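Steps k1) and k2) (like g1) and g2) of claim 28) project each vehicle along a velocity estimated from two successive position fixes and test safety zones for intersection. The sketch below uses fixed circular zones and a short constant-velocity horizon for illustration; the claims instead size zones dynamically from each vehicle's speed, braking, and turning ability.

```python
import math

def predict_collisions(tracks, radius, horizon=2.0, step=0.1):
    """Flag vehicle pairs whose safety zones would overlap.

    tracks: {ident: ((x0, y0), (x1, y1), dt)} - two successive
    position fixes dt seconds apart (step k1 velocity estimate).
    radius: circular safety-zone radius for every vehicle
    (illustrative; the patent sizes zones per vehicle).
    """
    warned = set()
    ids = sorted(tracks)
    t = 0.0
    while t <= horizon:
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                pa = _project(tracks[a], t)
                pb = _project(tracks[b], t)
                # Two circles of equal radius intersect when their
                # centers are closer than twice the radius (step k2).
                if math.dist(pa, pb) < 2 * radius:
                    warned.add((a, b))
        t += step
    return warned

def _project(track, t):
    # Constant-velocity projection from the latest fix.
    (x0, y0), (x1, y1), dt = track
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return (x1 + vx * t, y1 + vy * t)
```

Pairs returned by the check would then receive stop/slow instructions (automated vehicles, step k3) or operator warnings (manned vehicles, step k4).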
An apparatus useful for controlling an optically navigated automated vehicle within a predefined coordinate space for picking up an object from a first location and rotational orientation, transporting the object and delivering that object to a second, destination, location and rotational orientation, the apparatus comprising: a) a plurality of unique position markers, each comprising a machine-readable code, arranged in predetermined positional locations within the coordinate space such that at least one position marker is within view of the automated vehicle;b) an image acquisition system, comprised of a machine vision system, the machine vision system comprising a camera, an optional light source, and image capture electronics, mounted on the vehicle, for acquiring an image of the position markers within view;c) an image processing system for processing pixels in the acquired image to determine the identity of each position marker, the position of each position marker relative to the vehicle, and the rotational orientation of each position marker relative to the vehicle;d) a computer unit for calculating the position of the vehicle and the rotational orientation of the vehicle in the coordinate space; ande) a vehicle controller for receiving destinations for the vehicle from an input, for receiving the vehicle position and the rotational orientation of the vehicle from the computer unit, and for transmitting motion and steering instructions to the automated vehicle, the controller having a memory for storing:i) a predetermined map of the coordinate space,ii) the vehicle position and rotational orientation,iii) the destination of the vehicle,f) a wireless data communication network for transmitting data between the computer unit and the vehicle controller;the vehicle controller: determining a desired path for the automated vehicle to acquire the object at the first location and rotational orientation; transmitting motion and steering instructions to the automated 
vehicle until the automated vehicle reaches the location and rotational orientation of the object; transmitting fork control instructions to the automated vehicle, causing the automated vehicle to acquire the object; determining a desired path for the automated vehicle to the destination of the object; transmitting motion and steering instructions to the automated vehicle until the automated vehicle reaches the destination location and rotational orientation for the object; transmitting fork control instructions to the automated vehicle, causing the automated vehicle to deposit the object at the destination location and rotational orientation. 33. A method of managing utility vehicles within a predefined coordinate space, where the vehicles may be manned (i.e., human operated) or automated, i.e., guided by machines by determining a coordinate position and rotational orientation of the vehicles and by navigating the automated vehicles, the method comprising: a) creating a map of the coordinate space, the map determining allowable travel routes and locations of obstacles within the coordinate space, and storing the map within a memory in a vehicle controller;b) establishing a destination for each manned vehicle and each automated vehicle within the coordinate space and storing the identity and destination of each vehicle within the memory in the vehicle controller;c) determining the coordinate position and the rotational orientation of each vehicle within the predefined coordinate space by: i) providing a plurality of unique position markers having identifying indicia, positional reference and angular reference indicia thereupon, the markers being arranged at predetermined known positional locations and known angular orientations within the coordinate space, the known positional locations and known angular orientations being stored in the map, so that at least one position marker is within view of the vehicle;ii) using an image acquisition system mounted on each 
(manned and automated) vehicle: 1) acquiring an image of the at least one position marker within view;2) processing the image to determine the identity, the position relative to the vehicle, and the rotational orientation relative to the vehicle of each position marker within view; and3) calculating the coordinate position of the vehicle and the rotational orientation of the vehicle in the coordinate space and storing the coordinate position and rotational orientation information in a memory in the image acquisition system;d) transmitting the vehicle identity and the stored coordinate position and rotational orientation of that vehicle from the image acquisition system on each vehicle to the vehicle controller and storing the vehicle identity and coordinate position and rotational orientation in the memory within the vehicle controller;e) using the coordinate position of the vehicle and the rotational orientation of the vehicle, the predetermined map of the coordinate space and the destination for each vehicle stored within the memory in the vehicle controller, determining a desired path for each automated vehicle;f) transmitting motion and steering instructions to each automated vehicle;g) determining a predicted trajectory and a safety zone for each manned vehicle and each automated vehicle by calculating the velocity and direction of travel of each vehicle from coordinate positions at successive time intervals;h) determining any areas of intersection of the safety zone of each manned vehicle and each automated vehicle with the safety zones of other manned or other automated vehicles to predict a potential collision;i) transmitting instructions to reduce speed, turn, or stop, to any automated vehicle that has a safety zone intersecting any safety zone of any other vehicle to prevent the predicted collision;j) transmitting a warning to any manned vehicle that has a safety zone intersecting any safety zone of any other vehicle to alert the operator of such 
manned vehicle of a predicted potential collision, so that the operator can take appropriate action to avoid the predicted collision; andk) repeating steps c) through j) until each automated vehicle reaches the established destination for that vehicle. 34. The method of claim 33, wherein the steps c) through h) are repeated and if no areas of intersection between the safety zones of an automated vehicle previously slowed, turned or stopped and either a manned or another automated vehicle are subsequently determined:i) transmitting instructions to the automated vehicle previously slowed, turned or stopped, to resume its previous velocity and direction of travel;or, if no areas of intersection between a manned and another manned vehicle or an automated vehicle are subsequently determined:j) transmitting a signal canceling the warning to the manned vehicle to alert the operator that no potential collision is predicted. 35. The method of claim 33, further comprising a method of rerouting an automated vehicle within the coordinate space, the method comprising: after determining a desired path for each automated vehicle, for each iteration of step c):c1) checking the position and the rotational orientation of other vehicles to determine if the desired path is blocked;c2) if the desired path is blocked, selecting an alternate path; andc3) issuing new motion and steering instructions to each automated vehicle to follow the alternate path. 36. 
The method of claim 33, further comprising a method of rerouting an automated vehicle within the coordinate space, the method comprising: after determining a desired path for each automated vehicle, for each iteration of step c):c1) checking the position and the rotational orientation of other vehicles and calculating the speed and direction of travel of the other vehicles to determine if the desired path will be blocked;c2) if the desired path will be blocked, selecting an alternate path; andc3) issuing new motion and steering instructions to each automated vehicle to follow the alternate path. 37. A method of managing a mixed environment of manned vehicles and automated vehicles within a predefined coordinate space by determining a coordinate position and rotational orientation of manned vehicles and by optically navigating automated vehicles within the predefined coordinate space, the method comprising: a) creating a map of the coordinate space, the map determining allowable travel routes and locations of obstacles within the coordinate space, and storing the map within a memory in a vehicle controller;b) establishing a destination for each manned vehicle and each automated vehicle within the coordinate space and storing the identity and destination of each vehicle within the memory in the vehicle controller;c) determining the coordinate position and the rotational orientation of each vehicle within the predefined coordinate space by: i) providing a plurality of unique position markers having identifying indicia, positional reference and angular reference indicia thereupon, the markers being arranged at predetermined known positional locations and known angular orientations within the coordinate space, the known positional locations and known angular orientations being stored in the map, so that at least two position markers are within view of the vehicle;ii) using an image acquisition system mounted on each manned and automated vehicle: 1) acquiring an image of 
the at least two position markers within view;2) processing the image to determine the identity, the position relative to the vehicle, and the rotational orientation relative to the vehicle of each position marker within view; and3) calculating the coordinate position of the vehicle and the rotational orientation of the vehicle in the coordinate space;wherein the coordinate position and rotational orientation of each vehicle is determined by: A) acquiring an image of at least two position markers M1, M2 within view;B) establishing a center point N1, N2 of each respective marker M1, M2 and determining a line segment N1-N2 connecting the respective center points N1, N2;C) determining a center point O of the field of view of the image acquisition system;D) determining line segments O-N1, O-N2 respectively connecting point O with the centers N1, N2 of the respective position markers M1, M2;E) using the lengths and directions of line segments O-N1, O-N2 to calculate the position of the vehicle relative to the known positions of the markers M1, M2, thereby determining the location of the vehicle within the coordinate space; andF) using the direction of line segment N1-N2 to calculate the rotational orientation of the vehicle within the coordinate space; and storing the coordinate location and the rotational orientation of the vehicle in a memory in the image acquisition system;d) transmitting the vehicle identity and the stored coordinate position and rotational orientation of that vehicle from the image acquisition system on each vehicle to the vehicle controller and storing the vehicle identity and coordinate position and rotational orientation in the memory within the vehicle controller;e) using the coordinate position and rotational orientation of each vehicle, the predetermined map of the coordinate space and the destination for each vehicle stored within the memory in the vehicle controller, determining a desired path for each automated vehicle;f) transmitting 
motion and steering instructions to each automated vehicle;g) determining a predicted trajectory and a safety zone for each manned vehicle and each automated vehicle by calculating the velocity and direction of travel of each vehicle from coordinate positions at successive time intervals;h) determining any areas of intersection of the safety zone of each manned vehicle and each automated vehicle with the safety zones of other manned or automated vehicles to predict a potential collision;i) transmitting instructions to reduce speed, turn, or stop to any automated vehicle that has a safety zone intersecting any safety zone of any other vehicle to prevent the predicted collision;j) transmitting a warning to any manned vehicles that have a safety zone intersecting any safety zone of any other vehicle to alert the operators of such manned vehicles of a predicted potential collision, so that the operators can take appropriate action to avoid the predicted collision; andk) repeating steps c) through j) until each automated vehicle reaches the established destination for that vehicle. 38. The method of claim 37, wherein the steps c) through h) are repeated and if no areas of intersection between the safety zones of a manned and an automated vehicle are subsequently determined:i) transmitting instructions to an automated vehicle previously slowed or stopped to resume its previous velocity and direction of travel;or if no areas of intersection between a manned and another manned vehicle are subsequently determined:j) transmitting a signal canceling the warning to the manned vehicles to alert the operators that no potential collision is predicted. 39. 
The method of claim 37, further comprising a method of rerouting an automated vehicle within a coordinate space, the method comprising: after determining a desired path for each automated vehicle, for each iteration of step c):c1) checking to determine if the desired path is blocked by a stationary vehicle;c2) if the desired path is blocked, selecting an alternate path; andc3) issuing new motion and steering instructions to each automated vehicle to follow the alternate path. 40. An apparatus useful for managing a mixed environment of manned vehicles and automated vehicles within a predefined coordinate space by determining a coordinate position and rotational orientation of each manned vehicle and by optically navigating each automated vehicle within the predefined coordinate space, the apparatus comprising: a) a plurality of unique position markers, each comprising a machine-readable code, arranged in predetermined positional locations within the coordinate space such that at least one position marker is within view of the automated vehicle;b) an image acquisition system, comprised of a machine vision system, the machine vision system comprising a camera, an optional light source, and image capture electronics, mounted on each vehicle, for acquiring an image of the position markers within view;c) an image processing system for processing pixels in the acquired image to determine the identity of each position marker, the position of each position marker relative to the vehicle, and the rotational orientation of each position marker relative to each vehicle;d) a computer unit on each vehicle for calculating the position of that vehicle and the rotational orientation of that vehicle in the coordinate space; ande) a vehicle controller for receiving destinations for each automated vehicle from an input, for receiving the vehicle position and the rotational orientation of each vehicle from the computer unit on that vehicle, and for transmitting motion and steering 
instructions to each automated vehicle, the controller having a memory for storing:i) a predetermined map of the coordinate space,ii) the vehicle position and rotational orientation of each vehicle,iii) the destination of each automated vehicle, andf) a wireless data communication network for transmitting data between the computer unit and the vehicle controller;the vehicle controller: determining allowable travel routes and locations of obstacles within the coordinate space from the predetermined map; determining a desired path for each automated vehicle using the allowable travel routes; transmitting motion and steering instructions to the automated vehicle until each automated vehicle reaches its destination; determining a predicted trajectory and a safety zone for each manned vehicle and each automated vehicle by calculating the velocity and direction of travel of each vehicle from coordinate positions at successive time intervals; determining any areas of intersection of the safety zone of each manned vehicle and each automated vehicle with the safety zones of other manned or automated vehicles to predict a potential collision; transmitting instructions to reduce speed, turn, or stop to any automated vehicle that has a safety zone intersecting any safety zone of any other vehicle to prevent the predicted collision; and transmitting a warning to any manned vehicles that have a safety zone intersecting any safety zone of any other vehicle to alert the operators of such manned vehicles of a predicted potential collision, so that the operators can take appropriate action to avoid the predicted collision. 41. The apparatus of claim 9, further comprising: the computer unit calculating a mean value of the position of the vehicle using the positions of at least two position markers and a mean value of the rotational orientation of the vehicle in the coordinate space using the positions of at least two pairs of position markers.
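The two-marker pose computation recited in steps A) through F) of claim 37 can be sketched in a few lines of Python. This is an illustrative reading of the claim, not the patent's implementation: the function name, the metres-per-pixel scale factor, and the assumption that the camera axis is perpendicular to the marker plane (so the image axes align with the map axes at zero heading) are all assumptions introduced here.

```python
import math

def vehicle_pose(m1_world, m2_world, n1_img, n2_img, o_img, scale):
    """Estimate vehicle (x, y, heading) from an image of two markers.

    m1_world, m2_world : known (x, y) map positions of markers M1, M2
    n1_img, n2_img     : marker centre points N1, N2 in image pixels
    o_img              : centre point O of the camera field of view
    scale              : metres per pixel (assumed uniform; camera axis
                         assumed perpendicular to the marker plane)
    """
    # Step G/F: compare the direction of segment N1-N2 in the image with
    # the known direction of M1-M2 in the map to get the rotational
    # orientation (heading) of the vehicle.
    ang_img = math.atan2(n2_img[1] - n1_img[1], n2_img[0] - n1_img[0])
    ang_map = math.atan2(m2_world[1] - m1_world[1], m2_world[0] - m1_world[0])
    heading = (ang_map - ang_img) % (2 * math.pi)

    # Steps D/E: rotate the image-space offset O-N1 into map coordinates,
    # scale it to metres, and add it to the known position of M1 to get
    # the vehicle's location in the coordinate space.
    dx = (o_img[0] - n1_img[0]) * scale
    dy = (o_img[1] - n1_img[1]) * scale
    cos_t, sin_t = math.cos(heading), math.sin(heading)
    x = m1_world[0] + dx * cos_t - dy * sin_t
    y = m1_world[1] + dx * sin_t + dy * cos_t
    return x, y, heading
```

When more than two markers are in view, the mean-value refinement of claim 41 would average such estimates over additional markers and marker pairs.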
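Steps g) and h) of claims 33 and 37, which derive each vehicle's velocity from coordinate positions at successive time intervals and intersect the resulting safety zones, might be sketched as follows. The circular-zone model, the prediction horizon, and the base radius are illustrative assumptions; the claims do not fix a zone geometry.

```python
import math

def safety_zone(prev_pos, cur_pos, dt, horizon=1.0, base_radius=1.5):
    """Predict a circular safety zone from two successive positions.

    Velocity is estimated from the positions at consecutive time
    intervals (step g). The zone is centred on the position the vehicle
    is predicted to occupy `horizon` seconds ahead, and its radius grows
    with speed so a faster vehicle claims more clearance.
    """
    vx = (cur_pos[0] - prev_pos[0]) / dt
    vy = (cur_pos[1] - prev_pos[1]) / dt
    speed = math.hypot(vx, vy)
    centre = (cur_pos[0] + vx * horizon, cur_pos[1] + vy * horizon)
    return centre, base_radius + speed * horizon

def zones_intersect(zone_a, zone_b):
    """Step h: two circular zones overlap -> a collision is predicted."""
    (ax, ay), ra = zone_a
    (bx, by), rb = zone_b
    return math.hypot(bx - ax, by - ay) < ra + rb
```

On a predicted intersection, the controller would then apply steps i) and j): slow, turn, or stop an automated vehicle, and warn the operator of a manned one.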