Autonomous capabilities
The autonomous capabilities of the robot that have been implemented by the Autopilot Inference are focused on moving through and, in the future, interacting with the real world. Moving through the real world is often a prerequisite for conducting more complex jobs, such as follow-me, inspection of assets, or surveying an area. These complex jobs can be designed through our user interface, called Cerebra Studio, in which a user is able to schedule a series of tasks. The robot uses its autonomous capabilities to move from one location to the next, executing the tasks that the user has specified.
The current autonomous capabilities of the Autopilot Inference are localization, navigation, and obstacle avoidance. These capabilities are introduced below.
Localization
The Autopilot Inference provides three approaches by which the robot is able to keep track of its position during an operation. All three approaches require that the robot also keeps track of its so-called odometry: an estimate of the robot's position with respect to the real-world position at which it was turned on, obtained by integrating its IMU data (accelerometers and gyroscope) as well as its wheel encoders. The final position estimated by the robot is a combination of all four sources: the odometry and the three approaches below.
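The odometry integration mentioned above can be illustrated with a minimal planar dead-reckoning sketch. This is not the actual Autopilot Inference implementation; it assumes a forward speed from the wheel encoders and a yaw rate from the gyroscope, sampled at a fixed rate:

```python
import math

def integrate_odometry(pose, v, yaw_rate, dt):
    """Advance a planar pose (x, y, theta) by one time step.

    v        -- forward speed from the wheel encoders [m/s]
    yaw_rate -- angular rate from the gyroscope [rad/s]
    dt       -- time step [s]
    """
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += yaw_rate * dt
    return (x, y, theta)

# Example: drive forward at 0.5 m/s while turning slightly,
# starting in the frame in which the robot was turned on.
pose = (0.0, 0.0, 0.0)
for _ in range(100):  # 1 second at 100 Hz
    pose = integrate_odometry(pose, v=0.5, yaw_rate=0.1, dt=0.01)
```

Because this estimate drifts over time, it is combined with the three absolute localization approaches described next.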
Pose estimates

- Marker-based pose estimation is an approach by which a known ArUco marker is detected by the RGB camera of the robot, along with the relative pose of the marker with respect to the robot. The marker has a known position in the global frame, and therefore the pose of the robot in the global frame can be estimated by adding the relative pose between the marker and the robot to the global pose of the marker.
- Map-based pose estimation is an approach by which the 3D LiDAR data of the robot is turned into a laserscan, i.e., by projecting points in 3D onto a 2D horizontal plane, and then matching the measured range at each angle to the pre-recorded localization map of the environment.
- GNSS-based pose estimation is an approach by which the measured latitude and longitude are turned into a Cartesian position by subtracting a prior known latitude and longitude of the global frame's origin.
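The GNSS conversion can be sketched as follows. This is a simplified equirectangular approximation (adequate over the short ranges a ground robot covers around a fixed origin), not necessarily the exact projection used by the Autopilot Inference; the function name and constant are illustrative:

```python
import math

EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius [m]

def gnss_to_local(lat, lon, origin_lat, origin_lon):
    """Convert a GNSS fix to a Cartesian (east, north) position in meters,
    relative to the global frame's origin.

    Uses an equirectangular approximation: latitude/longitude differences
    are scaled to meters, with the longitude scaled by the cosine of the
    origin latitude.
    """
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    north = d_lat * EARTH_RADIUS
    east = d_lon * EARTH_RADIUS * math.cos(math.radians(origin_lat))
    return east, north
```

For example, a fix 0.0001 degrees north of the origin maps to roughly 11 meters in the north direction.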
Navigation
The Autopilot Inference has three approaches by which the robot is able to navigate from its starting position to a destination. Which navigation approach is eligible depends on the information available: is there a localization map, i.e., a map as created using the mapping workflow, and is the position of the robot in the global frame available, i.e., a position that extends the odometry position of the robot into a predefined global frame via one of the localization approaches? Depending on what is available, different navigation approaches are feasible.
- Map-constrained autonomy is a navigation approach in which the robot first plans a path on a global costmap to a received destination, where the global costmap is initialized on the occupied areas (high cost) of the localization map. The robot then moves along that self-planned path via its controller. The controller spawns a number of trajectories with a short horizon (up to 2-3 meters) and selects the trajectory that stays closest to the self-planned path, unless a local costmap tells otherwise. The local costmap is a short-horizon map of 2-3 meters surrounding the robot that keeps track of occupied areas based on its LiDAR information. Trajectories through or near occupied areas are discarded from selection. As the name suggests, this navigation approach requires a localization map on which both the current position of the robot and the set destination lie, since the robot is constrained to planning paths within this map. It is also required that the robot is able to localize itself accurately in the global frame, or map frame, from start to destination.
- Path-constrained autonomy is a navigation approach in which the robot receives a path and then moves along that received path via its controller. Similar to the prior navigation approach, the controller spawns a number of trajectories with a short horizon (up to 2-3 meters) and selects the trajectory that stays closest to the preplanned path, yet always within a set maximum distance of the path, unless a local costmap tells otherwise. The local costmap is a short-horizon map of 2-3 meters surrounding the robot that keeps track of occupied areas based on its LiDAR information. Trajectories through or near occupied areas are discarded from selection (just like trajectories that exceed the maximum allowable distance from the path). As the name suggests, this navigation approach requires a path, and can thus be used without a localization map; the robot is constrained to stay within a set distance of this path. It is also required that the robot is able to localize itself accurately in the global frame, or map frame, from start to destination.
- Nearby-constrained autonomy (under construction) is a navigation approach in which the robot receives a nearby waypoint, i.e., within 2-3 meters of its current position. The position of the waypoint is defined relative to the robot, where (2.0, 0.0) marks a position 2 meters in front of the robot, while (0.0, 2.0) marks a position 2 meters to its right. No path planning is done by the robot. Instead, the controller spawns a number of trajectories to the nearby waypoint and then uses the local costmap to determine the most feasible trajectory, i.e., avoiding known, occupied areas. If the waypoint is behind an object, then the controller might have difficulty selecting a feasible trajectory. As the name suggests, this navigation approach requires a waypoint that is relative to the robot and does not exceed a distance of 2-3 meters. Further note that there is no need for a position of the robot in the global frame, so this option can be used when there is no map or RTK-GNSS.
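The trajectory-selection step shared by these approaches can be sketched as follows. This is a simplified illustration of the idea (generate candidates, discard those through occupied cells or too far from the path, pick the closest remaining one), not the Autopilot Inference controller itself; the function name, costmap resolution, and data layout are assumptions:

```python
import math

def select_trajectory(candidates, path, occupied, max_dist=1.0):
    """Pick the candidate trajectory that stays closest to the path.

    candidates -- list of trajectories, each a list of (x, y) points
    path       -- the planned/received path as a list of (x, y) points
    occupied   -- set of occupied grid cells (ix, iy) at 0.1 m resolution
    max_dist   -- maximum allowed distance from trajectory to path [m]
    """
    def to_cell(p):
        return (int(round(p[0] / 0.1)), int(round(p[1] / 0.1)))

    def dist_to_path(p):
        return min(math.hypot(p[0] - q[0], p[1] - q[1]) for q in path)

    best, best_cost = None, float("inf")
    for traj in candidates:
        # Discard trajectories through or near occupied areas.
        if any(to_cell(p) in occupied for p in traj):
            continue
        # Discard trajectories that drift too far from the path.
        worst = max(dist_to_path(p) for p in traj)
        if worst > max_dist:
            continue
        if worst < best_cost:
            best, best_cost = traj, worst
    return best  # None: no feasible trajectory, the controller reports failure
```

Returning `None` here corresponds to the failure case mentioned above: no short-horizon trajectory satisfies both the occupancy and the distance-to-path policies.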
Obstacle avoidance
We distinguish between four types of ad-hoc objects that the robot may avoid: small-sized objects, medium-sized objects, large objects, and dynamic objects. Herein, the term "ad-hoc" means that these objects are not present on the floorplan of the environment as used by the robot, i.e., the localization map. Also, whether or not the robot will avoid an obstacle depends on whether its autonomy is map-constrained or path-constrained: the robot has more authority for re-planning a completely new path when it is map-constrained, while when path-constrained it should stick to the given path and not exceed any distance limitations between the path and its short-horizon trajectories.
- Directly avoid small-sized obstacles up to 15 cm. This capability is available for map-constrained as well as path-constrained autonomous navigation, yet only when selecting the FollowPathLoosely controller (hence not per se when selecting FollowPathStrictly, unless the robot is map-constrained). The idea is that the small object becomes visible as an occupied area on the local costmap, and that at least one automatically generated trajectory avoids the obstacle while satisfying the other policies (speed limit and maximum distance to the planned/received path). Of course, when the maximum distance to the path is very constrained, e.g., 30 cm, then it is unlikely that either FollowPathLoosely or FollowPathStrictly will find a suitable trajectory. In that case the controller, and therefore the behavior/task that exploits this controller, reports a failure.
- Keep on generating new trajectories for avoiding medium-sized obstacles of 15 to 50 cm. This capability is also available for map-constrained as well as path-constrained autonomous navigation, yet only when selecting the FollowPathLoosely controller (hence not per se when selecting FollowPathStrictly, unless the robot is map-constrained). The idea is that the medium-sized object becomes visible as an occupied area on the local costmap, and that it will take some time before an automatically generated trajectory also avoids the obstacle while satisfying the other policies (speed limit and maximum distance to the planned/received path). Each time a new set of trajectories is generated, the robot will have moved slightly along a prior trajectory that was not fully satisfactory. Eventually, as the robot keeps trying out different ways to approach the object, it is expected that a suitable trajectory is found and the obstacle is avoided.
- Re-plan and create a new path for avoiding large obstacles up to 2-3 meters. This capability is only available for map-constrained autonomous navigation, as it requires recomputing a path, which is not allowed in path-constrained autonomy. It is available for both the FollowPathLoosely and the FollowPathStrictly controller, since the controller actually has little effect on this behavior. The idea is that the large object becomes visible as an occupied area on the local costmap and that, even after trying, no automatically generated trajectory will avoid the obstacle while satisfying the other policies (speed limit and maximum distance to the planned/received path). Eventually, the controller returns a failure, but for map-constrained autonomous navigation the behavior is allowed to (once or twice) plan a new path from the current position of the robot to the destination. The large obstacle will by now also have been added to the global costmap, implying that a new path will likely be planned around the obstacle, provided the robot has obtained a good estimate of the obstacle's size and shape.
- Keep on generating new trajectories for moving through a narrow opening (of at least 1 meter). This capability is also available for map-constrained as well as path-constrained autonomous navigation, and in principle for both the FollowPathLoosely and the FollowPathStrictly controller, although it depends on how the actual path is defined (or planned) with respect to the opening. A path perfectly through the center of the opening gives the highest probability that the robot is able to pass the narrow opening. The idea is that either immediately, or after some trials when the path is not dead-center through the opening, an automatically generated trajectory stays far enough from both edges of the opening while satisfying the other policies (speed limit and maximum distance to the planned/received path). Each time a new set of trajectories is generated, the robot will have moved slightly along a prior trajectory that was not fully satisfactory. Eventually, as the robot keeps trying out different ways to approach the opening, it is expected that a suitable trajectory is found and the robot passes the opening.
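All four behaviors rely on obstacles becoming visible as occupied areas on the local costmap. A minimal sketch of that update, assuming a 2D laserscan as list of (angle, range) pairs and a grid resolution of 0.1 m (the function name and parameters are illustrative, not the Autopilot Inference API):

```python
import math

def update_local_costmap(scan, robot_pose, resolution=0.1, horizon=3.0):
    """Mark occupied cells around the robot from a 2D laserscan.

    scan       -- list of (angle [rad], range [m]) pairs in the robot frame
    robot_pose -- (x, y, theta) of the robot in the map frame
    Returns a set of occupied grid cells (ix, iy); trajectories through
    these cells are discarded by the controller.
    """
    x, y, theta = robot_pose
    occupied = set()
    for angle, r in scan:
        if r > horizon:  # keep only the short-horizon surroundings
            continue
        # Project the hit point into the map frame and snap it to a cell.
        px = x + r * math.cos(theta + angle)
        py = y + r * math.sin(theta + angle)
        occupied.add((int(round(px / resolution)), int(round(py / resolution))))
    return occupied
```

Hits beyond the 2-3 meter horizon are ignored, which is why the costmap only ever describes the robot's immediate surroundings.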
Info
Even though the robot encounters a small obstacle, it might occur that its behavior is similar to avoiding a medium-sized obstacle. Likewise, encountering a medium-sized obstacle may result in a behavior attributed to large obstacles. The reason is that how an obstacle is avoided heavily depends on the actual situation and on some randomization. Therefore, the robot may exhibit different behavior each time, even when the same obstacle is encountered in a similar situation.
Warning
When the robot encounters a slowly moving object, such as a human, it will try to avoid the moving object, typically by driving backwards slowly. However, be aware that this behavior is not guaranteed and if you approach the robot too fast, it may hit you.
Info
Please continue to the high level design of the Autopilot Inference.