
Navigation

Components that give the robot the ability to accurately ascertain its position and to plan and follow a route.

Introduction

One of the five high-level components of the Autopilot Inference is the Navigation stack. The Navigation stack should ensure that the robot is able to move between positions or locations, i.e., localization and navigation. We specifically distinguish between position and location to point out that a position is defined as a geometric waypoint in a Cartesian space, while a location is defined as a semantic, real-world object. The Navigation stack may include additional features on how the robot is able to move between positions, such as the avoidance of obstacles. Further, the robot must have the proper set of information before it is able to operate. Actual information, such as environmental scans or images, is provided by the Perception stack, while general information, such as a map or policy settings, is provided by the Cognition stack. There is of course a grey area as to where the Navigation stack starts to interpret information and where another stack finishes, for example in the case of detecting obstacles. Our line of reasoning on this matter will become clear in the subsequent sections of this page on the Navigation stack.

Capabilities

The capabilities that the Navigation stack provides have been designed to support the robot in localizing itself in the real world and then navigating from one position to another in an efficient and safe manner: efficient in the sense that trajectories are as short as possible, and safe in the sense that the robot is able to avoid obstacles (and, in the future, also cliffs, while understanding ramps). Obstacles can be embedded in a globally defined map of the environment (provided by the Cognition stack) or in a locally defined map of the environment relative to the robot’s body frame or the sensor frame from which the data was originally acquired (provided by the Perception stack). Therefore, the capabilities of the Navigation stack are related to localization, path planning and locomotion.

 

Move through poses is a scheduling capability that is executed when the behavior MoveThroughPoses is triggered by the Ordination stack. The capability requires a list of goal waypoints as input and then schedules three capabilities, i.e., Localization, Path planning and Locomotion, which are presented later on this page. Firstly, the current position of the robot is acquired, which is then sent to the path planner along with the list of goal waypoints. The path that is returned by the path planner is then sent to the Locomotion capability of the Navigation stack, which is responsible for making the robot follow that path from start to finish. In case Locomotion is not able to complete the path, for example due to a large object, the path is re-planned once and again sent to the Locomotion capability. In case Locomotion fails a second time, the MoveThroughPoses behavior in this example fails and is aborted. In reality, the Origin One is allowed to recompute a new path five times.

NavigationMoveThroughPoses
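
The scheduling logic described above can be summarized in a short sketch. The snippet below is a minimal, hypothetical Python outline; the helper functions `localize`, `plan_path` and `follow_path` are placeholders standing in for the Localization, Path planning and Locomotion capabilities and are not part of the actual Autopilot Inference API. It only illustrates the chain of capabilities and the re-planning limit of five attempts.

```python
# Hypothetical sketch of the MoveThroughPoses scheduling logic.
# The callables passed in are placeholders for the real capabilities.

MAX_REPLANS = 5  # the Origin One may recompute a new path five times


def move_through_poses(goal_waypoints, localize, plan_path, follow_path):
    """Schedule Localization, Path planning and Locomotion for a list of goals."""
    for attempt in range(MAX_REPLANS + 1):
        current_pose = localize()                       # Localization capability
        path = plan_path(current_pose, goal_waypoints)  # Path planning capability
        if follow_path(path):                           # Locomotion capability
            return True                                 # path completed, behavior succeeds
        # Locomotion failed (e.g. a large object blocks the path): re-plan and retry.
    return False                                        # behavior fails and is aborted
```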

 

Move along path is a scheduling capability that is executed when the behavior MoveAlongPath is triggered by the Ordination stack. The capability requires a preplanned path that the robot has to follow and then schedules two capabilities, i.e., Localization and Locomotion, which are presented later on this page. Firstly, the current position of the robot is acquired, which is then sent to the path planner along with the starting position of the preplanned path. The resulting path is then sent to the Locomotion capability and ensures that, if the robot was not yet near the path it has to follow, it first drives towards that preplanned path. For safety reasons the robot will only plan such an approach path when the distance to the preplanned path is less than 3 meters; otherwise it will report a failure. When arriving at the starting position of the preplanned path, the Locomotion capability is called again, but now with the preplanned path. In addition, a policy parameter is provided indicating that the robot is constrained to follow this preplanned path (and may not deviate from it too much). The Locomotion capability will then try to complete this preplanned path. In case it fails, for example because a large object is blocking its path, it will re-assess the situation for one minute. In case the obstacle is removed, Locomotion continues with the path; otherwise it will fail and the entire behavior is aborted.

NavigationMoveAlongPath
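
The flow above can likewise be sketched in a few lines. This is a simplified illustration, not the real implementation: `localize`, `plan_path`, `follow_path` and `path_is_clear` are hypothetical placeholders, the distance check is reduced to the distance towards the start of the preplanned path, and poses are treated as plain (x, y) tuples.

```python
# Hypothetical sketch of the MoveAlongPath scheduling logic.
import math
import time

APPROACH_LIMIT_M = 3.0   # only approach the path when closer than 3 meters
WAIT_TIMEOUT_S = 60.0    # re-assess a blocked path for at most one minute


def move_along_path(preplanned_path, localize, plan_path, follow_path, path_is_clear):
    """Schedule Localization and Locomotion for a preplanned path."""
    current_pose = localize()
    start = preplanned_path[0]
    if math.dist(current_pose, start) > APPROACH_LIMIT_M:
        return False                                    # too far from the path: report a failure
    approach = plan_path(current_pose, [start])         # short path towards the preplanned path
    if not follow_path(approach, constrained=False):
        return False
    # Follow the preplanned path itself, constrained to stay close to it.
    if follow_path(preplanned_path, constrained=True):
        return True
    # Locomotion failed, e.g. a large object blocks the path: wait up to one minute.
    deadline = time.monotonic() + WAIT_TIMEOUT_S
    while time.monotonic() < deadline:
        if path_is_clear():
            return follow_path(preplanned_path, constrained=True)
        time.sleep(1.0)
    return False                                        # obstacle remained: behavior is aborted
```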

 

Cover area is a scheduling capability that is executed when the behavior CoverArea is triggered by the Ordination stack. The capability requires the UUID of the polygon of the area that it should cover and, optionally, the UUIDs of other polygons marking either the go-area of the robot or (multiple) nogo-areas. In addition, it may also take into account the localization map so that it won't plan paths on occupied areas. It will then schedule three capabilities, i.e., Localization, Coverage path planning and Locomotion, which are presented later on this page. Firstly, the current position of the robot is acquired, along with the polygon(s) that mark the different types of areas, and possibly also the localization map. All of this is then sent to the coverage path planner, which will plan a lawnmower path and two outer contour paths (unless stated otherwise in the behavior's policy), thereby avoiding any nogo-areas and occupied areas while staying within the go-area. The resulting path is then sent to the Locomotion capability, for which it is best to select the "FollowPathStrictly" controller as that supports the required U-turns, and the robot will drive along the planned path. Additional policy parameters can be provided to indicate that the robot is constrained to follow this preplanned path, limiting how far it may deviate from the path and at what speed it follows the path (although when selecting "FollowPathStrictly" the speed is always set to a low value). The Locomotion capability will then try to complete this preplanned path. In case it fails, for example because a large object is blocking its path, it will report a failure and the entire behavior is aborted.

NavigationMoveAlongPath
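
The inputs of this capability can be grouped as in the sketch below. This is a minimal illustration of how such a request might be assembled and handed to a coverage planner; the `CoverageRequest` fields, `plan_coverage_path` and `follow_path` are assumptions for the sake of the example and do not reflect the actual Autopilot Inference interfaces (only the "FollowPathStrictly" controller name comes from the text above).

```python
# Hypothetical sketch of the CoverArea inputs and scheduling.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class CoverageRequest:
    cover_area_uuid: str                       # polygon of the area to cover
    go_area_uuid: Optional[str] = None         # optional polygon marking the go-area
    nogo_area_uuids: List[str] = field(default_factory=list)  # optional nogo-areas
    use_localization_map: bool = True          # avoid occupied cells of the localization map
    outer_contours: int = 2                    # lawnmower path plus two outer contour paths


def cover_area(request, localize, plan_coverage_path, follow_path):
    """Schedule Localization, Coverage path planning and Locomotion."""
    current_pose = localize()
    path = plan_coverage_path(current_pose, request)   # lawnmower + contour paths
    # "FollowPathStrictly" keeps the robot on the planned U-turns at low speed.
    return follow_path(path, controller="FollowPathStrictly")
```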

 

Localization is a capability in which the robot combines pose information from different sources into a single estimated pose of the robot. This estimated pose will initially start in a local coordinate system, i.e., at the starting point of the robot, but the robot must have an estimated pose in a global coordinate system before it is allowed to move. Such an initial global position can be obtained from an ArUco marker or from an RTK-GNSS fix. After that, the robot can combine position estimates from other ArUco markers, from a stream of RTK-GNSS fixes, or from an algorithm that matches the lidar scan of the Perception stack to a known map of the environment. The illustrations below depict these three different concepts used by the robot for estimating a pose in the global coordinate frame, which are merged into a single pose estimate to increase robustness and accuracy.

NavigationPoseEstimation
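
To make the idea of merging several global estimates concrete, the sketch below fuses 2D positions from the three sources (ArUco, RTK-GNSS, lidar-to-map matching) with a simple inverse-variance weighted average. This is only an illustration of the fusion concept under the assumption that each source reports a position and a variance; the actual estimator in the Autopilot Inference is presumably more elaborate (it also tracks orientation and uses motion information between updates).

```python
# Minimal sketch of fusing several global position estimates into one.

def fuse_positions(estimates):
    """estimates: list of ((x, y), variance) tuples from different sources."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(estimates, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(estimates, weights)) / total
    fused_variance = 1.0 / total                 # more sources -> smaller uncertainty
    return (x, y), fused_variance


# Example: an ArUco fix, an RTK-GNSS fix and a lidar-to-map match (values are made up).
sources = [((2.10, 5.30), 0.04), ((2.05, 5.28), 0.01), ((2.12, 5.35), 0.09)]
print(fuse_positions(sources))
```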

 

Path planning is a capability in which the robot is able to plan an entire path, or trajectory, between two waypoints, i.e., the robot’s current position and a destination position. For now, the Autopilot Inference only supports path planning when both the robot and its destination can be pinpointed on the map that is provided by the Cognition stack, which is typically a pre-recorded occupancy map. The path is the result of an optimization step on the following conditions:

  • The path should be as short as possible.
  • The path should be maneuverable for the robot, i.e., safe so that nothing is hit.
    • This currently takes the footprint of the robot into account, and may include terrain analysis in the future.

Specific go- and nogo-areas should be embedded in the map that is provided to the planner and can therefore not be explicitly communicated to the planner. This means that the robot is allowed to go anywhere on the map, whereas areas that the robot is not able or allowed to enter are labeled as occupied or unknown. Also, the map that is used by the planner can be updated with actual information, such as ad-hoc objects that occupy certain areas and were not present when the map was recorded.

The illustration below depicts these different aspects of path planning, with the prior known (recorded) map retrieved from the Cognition stack, which is merged with occupancy information derived from the lidar scan and then further overlaid with a safety zone that is, in this case, based on the robot’s footprint.

NavigationPathPlanning
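
The sketch below illustrates these planning inputs in code: the recorded map is merged with lidar-derived occupancy, obstacles are inflated by a safety radius standing in for the robot's footprint, and a shortest collision-free path is searched on the resulting grid. A plain breadth-first search on a coarse grid is used here only as a stand-in for the actual planner, which is not documented in this section.

```python
# Illustrative occupancy-grid planning sketch (not the actual planner).
from collections import deque

FREE, OCCUPIED = 0, 1


def merge_and_inflate(recorded, lidar, radius):
    """Overlay lidar occupancy on the recorded map and inflate obstacles by `radius` cells."""
    rows, cols = len(recorded), len(recorded[0])
    merged = [[max(recorded[r][c], lidar[r][c]) for c in range(cols)] for r in range(rows)]
    inflated = [row[:] for row in merged]
    for r in range(rows):
        for c in range(cols):
            if merged[r][c] == OCCUPIED:
                for dr in range(-radius, radius + 1):
                    for dc in range(-radius, radius + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            inflated[rr][cc] = OCCUPIED
    return inflated


def plan(grid, start, goal):
    """Shortest 4-connected path over free cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                       # reconstruct the path back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == FREE and (rr, cc) not in parents:
                parents[(rr, cc)] = cell
                frontier.append((rr, cc))
    return None                                # destination unreachable on this map
```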

 

Locomotion, or moving, is a capability in which the robot receives a path, or trajectory, from the planner and then controls the robot along that path by periodically producing a reference speed. However, the robot may not assume that the trajectory is completely valid, as the situation might have changed in areas where the (updated) map is outdated due to ad-hoc or moving objects. Therefore, the robot keeps track of an up-to-date local map in which its current vicinity of 4x4 meters is modeled as free and occupied space. This model is updated 5 times a second so that slowly moving objects can be avoided as well. The robot creates a local path of 4 meters ahead of its current position, in which there is some slack as to how rigidly it needs to follow the original trajectory it received versus the policy to avoid obstacles. This policy depends on the type of obstacle and on whether or not the robot received a path that it is constrained to follow; a compact summary in code follows after the lists below:

In case of moving with a path constraint

  • Small-sized obstacles up to 10 cm can be avoided, either directly or after a few attempts of trying out different local paths.
  • Medium and large-sized obstacles (more than 15 cm) will not be avoided, as the robot would need to deviate too much from its path constraint. Therefore, the robot will stop and wait until the object is no longer there. If, after 1 minute, the object is still blocking its path, the GoTo task, and the corresponding job, will be cancelled.
    • Note that in case a medium-sized obstacle is not dead-center on the robot's path, it may be possible that the robot avoids this obstacle as well.

In case of moving without any path constraint

  • Small and medium-sized obstacles up to 50 cm can be avoided, either directly or after a few attempts of trying out different local paths.
  • Large-sized obstacles up to 4 meters cannot be avoided, as the robot is not allowed to plan a local path that deviates too far from the original path it received from the planner. The robot will make several attempts, for about 15 seconds, after which it will not have succeeded in avoiding the obstacle. The robot then returns a failure, after which a complete new path is generated by the planner. As the large obstacle is now also known to the robot’s path planner, it should plan a new path in which the large obstacle is avoided.

NavigationMoving
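
The obstacle policy above can be condensed into a small decision function. The sketch below is a hypothetical summary for illustration only; the size thresholds and timeouts are taken from the text, while the function and the returned action labels are placeholders.

```python
# Hypothetical summary of the obstacle policy described above.
SMALL_MAX_M = 0.10        # with a path constraint: avoid obstacles up to 10 cm
MEDIUM_MAX_M = 0.50       # without a constraint: avoid obstacles up to 50 cm
WAIT_TIMEOUT_S = 60.0     # wait up to one minute for a blocking object to move
RETRY_TIMEOUT_S = 15.0    # try local detours for about 15 seconds


def avoidance_action(obstacle_size_m, path_constrained):
    """Return the locomotion reaction to an obstacle on the local path."""
    if path_constrained:
        if obstacle_size_m <= SMALL_MAX_M:
            return "avoid_locally"          # try a few alternative local paths
        return "wait_then_cancel"           # stop; cancel the task after WAIT_TIMEOUT_S
    if obstacle_size_m <= MEDIUM_MAX_M:
        return "avoid_locally"
    return "fail_and_replan"                # report failure; the planner makes a new global path
```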



Info

Avular collaborators are authorized to read the details of these developments, which can be found on the Development-Navigation page.