
Navigation

Components that give the robot the ability to accurately ascertain its position
and plan and follow a route.

Introduction

The Navigation stack is one of the five high-level components of the Autopilot Inference. It is responsible for ensuring that the robot can move through its environment using its localization and navigation functionalities. In both localization and navigation, a distinction is made between position and location: a position refers to a specific geometric point in Cartesian space, while a location refers to a real-world object or context with semantic meaning.

The Navigation stack is responsible for determining how the robot moves, which might include features like obstacle avoidance. However, for it to operate effectively, the robot needs access to the right information. This includes up-to-date data, such as sensor readings, images, and environmental scans, which comes from the Perception stack, and general data, like maps or policy configurations, which is provided by the Cognition stack.

There is some overlap between these components. For example, interpreting whether something is an obstacle can involve both perception and navigation. Our approach to these boundaries, particularly where the Navigation stack begins and other stacks end, is explained further in the upcoming sections on the Navigation stack.

Capabilities

The capabilities of the navigation stack have been designed to let the robot map an environment, localize itself in the real world, and then navigate from one position to another in an efficient and safe manner: efficient in the sense that trajectories are as short as possible, and safe in the sense that the robot avoids obstacles (and, in the future, also cliffs, while understanding ramps). Obstacles can be recorded in a globally defined localization map of the environment (externally provided as an occupancy map by the cognition stack), or in a locally defined map of the environment relative to the robot's body frame (externally provided as a LaserScan by the perception stack). Furthermore, the navigation stack assumes that the TF-tree and odometry information of the robot are available, while it commands the robot via velocity setpoints.

Therefore, the capabilities of the navigation stack comprise specific behaviors for how to move, such as moving through poses, moving along a prescribed path, covering an area, and docking, as well as the software functionalities by which those behaviors are executed, such as mapping, localization, path planning, and locomotion.
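As a rough illustration of these external interfaces, the sketch below shows how such inputs and outputs could be wired up in a ROS 2 node. The topic names (/map, /scan, /odom, /cmd_vel) are common defaults and are assumptions here, not a specification of the actual interface.

```python
# Minimal sketch of the navigation stack's assumed external interfaces in ROS 2.
# Topic names are illustrative defaults, not the actual interface specification.
import rclpy
from rclpy.node import Node
from nav_msgs.msg import OccupancyGrid, Odometry
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist


class NavigationIo(Node):
    def __init__(self):
        super().__init__('navigation_io_sketch')
        # Inputs: global localization map (cognition stack), local scan
        # (perception stack), and odometry; the TF-tree would be available
        # via tf2 in a full implementation.
        self.create_subscription(OccupancyGrid, '/map', self.on_map, 1)
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)
        # Output: velocity setpoints commanding the robot base.
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_map(self, msg: OccupancyGrid):
        self.map = msg    # latest localization map

    def on_scan(self, msg: LaserScan):
        self.scan = msg   # latest local obstacle information

    def on_odom(self, msg: Odometry):
        self.odom = msg   # latest odometry estimate


def main():
    rclpy.init()
    rclpy.spin(NavigationIo())


if __name__ == '__main__':
    main()
```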

 

Move through poses is a scheduling capability that is executed when the behavior MoveThroughPoses is triggered for execution. The capability requires a list of goal waypoints as input and calls specific functionalities of the other capabilities in a particular order, i.e., localization, path planning and locomotion, which are presented later on this page. This moving capability acquires the robot's position, the list of goal waypoints, and the occupancy map that the robot uses for localization (and planning). All positions and waypoints should lie on this map. After planning a path that passes through all waypoints, the moving behavior continues by moving along that path while directly avoiding small and medium sized obstacles. When a large obstacle is encountered, the robot will try to move around it and, when it is not able to, it will replan a path. Also, while moving, the robot keeps track of its localization accuracy and stops to wait for better accuracy in case its position estimate becomes too uncertain.

If and how obstacles are avoided depends on the controller that is selected, which can be "FollowPathLoosely", "FollowPathRoughly" or "FollowPathStrictly". These controllers differ in how far they may deviate from the path, which results in either a swifter robot or a robot that moves more carefully along its path.
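The snippet below is a hypothetical sketch of what a MoveThroughPoses request could look like, illustrating the required waypoint list and the controller choice; the actual behavior interface may differ.

```python
# Hypothetical request structure for triggering MoveThroughPoses; illustrative
# only, not the actual behavior interface.
from dataclasses import dataclass

CONTROLLERS = ("FollowPathLoosely", "FollowPathRoughly", "FollowPathStrictly")


@dataclass
class MoveThroughPosesRequest:
    waypoints: list[tuple[float, float, float]]  # (x, y, yaw) on the localization map
    controller: str = "FollowPathRoughly"

    def __post_init__(self):
        if self.controller not in CONTROLLERS:
            raise ValueError(f"unknown controller: {self.controller}")


# Example: visit three waypoints with the default controller.
request = MoveThroughPosesRequest(
    waypoints=[(2.0, 0.0, 0.0), (2.0, 3.0, 1.57), (0.0, 3.0, 3.14)],
)
```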

MoveThroughPoses

 

Move along path is a scheduling capability that is executed when the behavior MoveAlongPath is triggered for execution. The capability requires a preplanned path that the robot is expected to follow, and it then calls specific functionalities of two capabilities in a particular order, i.e., localization and locomotion, which are presented later on this page. After acquiring the robot's current position and the given path, this moving capability checks where the robot is with respect to the path. In case the robot is not on the path yet, it plans a short path of at most 3 meters to drive the robot to the path. Once the robot is on the path, which may also be somewhere halfway along it, it continues by moving along that path while directly avoiding small and medium sized obstacles. When a large obstacle is encountered, the robot will try to move around it, but because the robot is not allowed to deviate too much from the given path, it is unlikely to be able to avoid the large obstacle. Therefore, it will wait for half a minute and, if the large obstacle is still there, it will abort the moving capability (otherwise it will continue along its path). Also, while moving, the robot keeps track of its localization accuracy and stops to wait for better accuracy in case its position estimate becomes too uncertain.
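The following sketch illustrates the on-path check described above, assuming the path is a list of (x, y) points; the tolerance value and function names are illustrative, not the actual implementation.

```python
# Sketch of the on-path check for MoveAlongPath: find the closest point on the
# given path and, if the robot is more than a small tolerance away, plan a
# short connector path of at most 3 meters. Threshold names are illustrative.
import math

ON_PATH_TOLERANCE_M = 0.2   # assumed tolerance for "already on the path"
MAX_CONNECTOR_M = 3.0       # maximum length of the connector path


def closest_point_index(path, robot_xy):
    return min(range(len(path)),
               key=lambda i: math.dist(path[i], robot_xy))


def join_strategy(path, robot_xy):
    i = closest_point_index(path, robot_xy)
    gap = math.dist(path[i], robot_xy)
    if gap <= ON_PATH_TOLERANCE_M:
        return ("follow_from", i)        # already on the path, possibly halfway
    if gap <= MAX_CONNECTOR_M:
        return ("plan_connector_to", i)  # short connector path onto the path
    return ("reject", None)              # too far away to join the path
```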

As with MoveThroughPoses, if and how obstacles are avoided depends on the controller that is selected, which can be "FollowPathLoosely", "FollowPathRoughly" or "FollowPathStrictly". These controllers differ in how far they may deviate from the path, which results in either a swifter robot or a robot that moves more carefully along its path.

MoveAlongPath

 

Cover area is a scheduling capability that is executed when the behavior CoverArea is triggered for execution. The capability requires the UUID of the polygon of the area that it should cover and, optionally, the UUIDs of other polygons marking either the go-area of the robot or (multiple) nogo-areas. In addition, it may also take the localization map into account so that it will not plan paths on occupied areas. It then calls specific functionalities of three capabilities in a particular order, i.e., localization, coverage path planning and locomotion, which are presented later on this page. After acquiring all the necessary polygons, the capability plans a path by which it will cover the given area. Typically a lawnmower pattern is planned, of which the angle can be set by a policy, unless the capability is triggered to deviate from the lawnmower pattern, in which case it will plan paths from the polygon's outer contour further inwards. The robot then moves along that path while avoiding small and medium sized obstacles. Here too, the avoidance maneuvers depend on the controller that is selected, i.e., "FollowPathLoosely", "FollowPathRoughly" or "FollowPathStrictly". We advise selecting "FollowPathRoughly" for outdoor operations and operations where ad-hoc obstacles may be present, and "FollowPathStrictly" for indoor operations without any obstacles.
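As an illustration of the lawnmower pattern, the sketch below generates a back-and-forth coverage path over a simple (convex) polygon at a given sweep angle. The tool_width spacing parameter is an assumption, and the handling of nogo-areas and the localization map is omitted.

```python
# Sketch of a lawnmower coverage pattern over a convex polygon. The sweep angle
# corresponds to the policy-set angle mentioned above; tool_width is an assumed
# spacing parameter. Real planning also handles nogo-areas, which this omits.
import math


def rotate(points, angle):
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y, s * x + c * y) for x, y in points]


def line_polygon_hits(polygon, y):
    """X-coordinates where the horizontal line at height y crosses the polygon."""
    xs = []
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:] + polygon[:1]):
        if (y1 <= y < y2) or (y2 <= y < y1):  # edge spans the sweep line
            t = (y - y1) / (y2 - y1)
            xs.append(x1 + t * (x2 - x1))
    return sorted(xs)


def lawnmower(polygon, angle, tool_width):
    poly = rotate(polygon, -angle)        # sweep along the rotated x-axis
    ys = [p[1] for p in poly]
    y, path, forward = min(ys) + tool_width / 2, [], True
    while y < max(ys):
        xs = line_polygon_hits(poly, y)
        if len(xs) >= 2:
            a, b = (xs[0], xs[-1]) if forward else (xs[-1], xs[0])
            path += [(a, y), (b, y)]      # one back-and-forth stripe
            forward = not forward
        y += tool_width
    return rotate(path, angle)            # back to the map frame
```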

CoverArea

 

Docking is a scheduling capability that is executed when the behavior Docking or Undocking is triggered for execution. The Undocking capability does not require any input and directly calls the locomotion capability to drive 1 meter backwards. Undocking can be triggered at any moment of the operation when the robot has driven and stopped close to, or has even touched, walls or other objects. In those situations you should first use Undocking, or FollowPath, so that the robot drives into more open space before continuing its operation. The Docking capability requires the ID of the Aruco marker at which it needs to dock and the distance it needs to maintain with respect to the marker. Please note that a minimal distance of 0.45 meter is required, since the robot measures this distance to the object from its base-link, which for the Origin One is located some 45 cm from the tip of the nose towards the center. The Docking capability does not have obstacle avoidance, so please ensure that there are no objects in front of the docking location. Docking first asks the perception stack to continuously publish the goal pose at which the robot needs to dock, i.e., the pose of the marker offset by the set distance.

To ensure that the marker is detected in the camera image, we advise starting the docking capability only when the robot is within 1.5 to 3 meters of the dock and within an angle of 30 degrees.
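The sketch below encodes these recommendations: a pre-condition check for the 1.5 to 3 meter window and the 30 degree cone, plus the computation of a goal pose at the set distance in front of the marker. Frame transformations are omitted and all poses are assumed to be expressed in the same frame.

```python
# Sketch of the recommended Docking pre-conditions and of the goal pose at the
# set distance in front of the marker. Illustrative only; TF handling omitted.
import math

MIN_DOCK_DISTANCE_M = 0.45  # base-link sits ~45 cm from the tip of the nose


def can_start_docking(robot_xy, robot_yaw, marker_xy):
    dx, dy = marker_xy[0] - robot_xy[0], marker_xy[1] - robot_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - robot_yaw
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return 1.5 <= distance <= 3.0 and abs(bearing) <= math.radians(30)


def docking_goal(marker_xy, marker_yaw, keep_distance):
    """Goal pose at keep_distance in front of the marker, facing the marker."""
    d = max(keep_distance, MIN_DOCK_DISTANCE_M)  # enforce the 0.45 m minimum
    gx = marker_xy[0] + d * math.cos(marker_yaw)
    gy = marker_xy[1] + d * math.sin(marker_yaw)
    return (gx, gy, marker_yaw + math.pi)
```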

Docking

 

Localization is a functional capability in which the robot combines pose information from different sources into a single estimated pose of the robot. This estimated pose initially starts in a local coordinate system, i.e., at the starting point of the robot, but the robot must have an estimated pose in a global coordinate system before it is allowed to move. Such an initial global position can be obtained from an Aruco marker or from an RTK-GNSS fix. After that, the robot can combine position estimates from other Aruco markers, from a stream of RTK-GNSS fixes, or from an algorithm that matches the lidar scan of the perception stack to a known map of the environment. The illustrations below show these three different concepts used by the robot for estimating a pose in the global coordinate frame, which are merged into a single pose estimate to increase robustness and accuracy.
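As a simplified stand-in for the actual sensor fusion, the sketch below merges pose estimates from several sources by inverse-variance weighting, which shows why adding sources increases both robustness and accuracy.

```python
# Sketch of fusing pose estimates from several sources (Aruco, RTK-GNSS, scan
# matching) by inverse-variance weighting; a simplified stand-in for the
# actual filter. Each source reports (x, y) with a scalar variance.
def fuse_poses(estimates):
    """estimates: list of ((x, y), variance) tuples from the available sources."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    x = sum(w * p[0] for (p, _), w in zip(estimates, weights)) / total
    y = sum(w * p[1] for (p, _), w in zip(estimates, weights)) / total
    fused_variance = 1.0 / total  # confidence grows with each added source
    return (x, y), fused_variance


# Example: a marker-based fix (accurate) combined with a GNSS fix (coarser).
pose, var = fuse_poses([((1.02, 2.98), 0.01), ((1.10, 3.05), 0.09)])
```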

PoseEstimation

 

Path planning is a functional capability in which the robot plans an entire path, or trajectory, between two waypoints, i.e., the robot's current position and a destination position. For now, this planning function only supports path planning when both the robot and its destination can be pinpointed on the localization map that is provided by the cognition stack, which is typically a pre-recorded occupancy map. The path is the result of an optimization step under the following conditions:

  • The path should be as short as possible.
  • The path should be maneuverable for the robot, i.e., safe so that nothing is hit.
    • This currently takes the footprint of the robot into account, and may include terrain analysis in the future.

Specific go- and nogo-areas should be embedded in the map that is provided to the planner and can therefore not be explicitly communicated to the planner. This means that the robot is allowed to go anywhere on the map, whereas areas that the robot is not able or allowed to enter are labeled as occupied or unknown. Also, the map that is used by the planner can be updated with up-to-date information, such as ad-hoc objects that occupy certain areas and were not present when the map was recorded.

The illustration below depicts these different aspects of path planning, with the prior known (recorded) map retrieved from the cognition stack, which is merged with occupancy information derived from the lidar scan, and then further overlaid with a safety zone that is, in this case, based on the robot's footprint.
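The sketch below mimics this pipeline on a cell grid: the prior map is merged with lidar-derived occupancy, inflated by an assumed footprint radius, and searched with A* for a shortest path. It treats occupied and unknown cells as non-traversable, in line with the labeling described above; it is an illustration, not the actual planner.

```python
# Sketch of planning on the merged occupancy grid. Grid values: 0 = free,
# 1 = occupied or unknown (not traversable). 4-connected cells, unit cost.
import heapq


def merge(prior, lidar):
    """Cell-wise merge of the prior map with lidar-derived occupancy."""
    return [[max(a, b) for a, b in zip(ra, rb)] for ra, rb in zip(prior, lidar)]


def inflate(grid, radius_cells):
    """Grow occupied cells by an assumed footprint radius (safety zone)."""
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if grid[r][c]:
                for dr in range(-radius_cells, radius_cells + 1):
                    for dc in range(-radius_cells, radius_cells + 1):
                        if 0 <= r + dr < h and 0 <= c + dc < w:
                            out[r + dr][c + dc] = 1
    return out


def astar(grid, start, goal):
    """Shortest path with A* and a Manhattan-distance heuristic."""
    h, w = len(grid), len(grid[0])
    dist, came, heap = {start: 0}, {}, [(0, start)]
    while heap:
        _, cell = heapq.heappop(heap)
        if cell == goal:
            path = [cell]
            while cell in came:
                cell = came[cell]
                path.append(cell)
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < h and 0 <= nc < w and not grid[nr][nc]:
                nd = dist[cell] + 1
                if nd < dist.get((nr, nc), float('inf')):
                    dist[(nr, nc)] = nd
                    came[(nr, nc)] = cell
                    priority = nd + abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(heap, (priority, (nr, nc)))
    return None  # unreachable, e.g., goal inside an occupied or unknown area
```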

PathPlanning

 

Locomotion, or moving, is a functional capability in which the robot receives a path, or trajectory, from the planner and then controls the robot along that path by periodically producing a reference speed for the robot. However, the robot may not assume that the trajectory is completely valid, as the situation might have changed in areas where the (updated) map is outdated due to ad-hoc or moving objects. Therefore, the robot keeps track of an up-to-date local map in which its current vicinity of 4x4 meters is modeled as areas of free space and occupied space. This model is updated 5 times a second so that slowly moving objects can be avoided as well. The robot creates a local path of 4 meters ahead of its current position in which there is some slack as to how rigidly it needs to follow the original trajectory it received versus a policy to avoid obstacles. This policy depends on the type of obstacle and on whether or not the robot received a path that it is constrained to follow, as summarized below and sketched after the lists:

In case of moving with a path constraint

  • Small sized obstacles up to 10 cm can be avoided either directly or after a few attempts of trying out different local paths.
  • Medium and large sized obstacles (more than 15 cm) will not be avoided, as the robot would need to deviate too much from its path constraint. Therefore, the robot will stop and wait until the object is gone. If, after 1 minute, the object is still blocking its path, the GoTo task, and the corresponding job, will be cancelled.
    • Note that in case a medium sized obstacle is not dead-center on its path, the robot may be able to avoid this obstacle as well.

In case of moving without any path constraint

  • Small and medium sized obstacles up to 50 cm can be avoided either directly or after a few attempts of trying out different local paths.
  • Large sized obstacles up to 4 meters cannot be avoided, as the robot is not allowed to plan a local path that is too far off the original path it received from the planner. The robot will make several attempts, for about 15 seconds, after which it will not have succeeded in avoiding the obstacle. The robot then returns a failure, after which a complete new path is generated by the planner. As the large obstacle is now also known to the robot's path planner, it should plan a new path in which the large obstacle is avoided.
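The decision logic of both lists can be summarized in a small policy function, sketched below with the thresholds and timeouts from the text; the returned decision names are illustrative only.

```python
# Sketch of the size- and constraint-dependent avoidance policy described
# above. Thresholds and timeouts follow the text; decision names are
# illustrative, not the actual interface.
def obstacle_policy(obstacle_size_m, path_constrained, blocked_time_s):
    if path_constrained:
        if obstacle_size_m <= 0.10:
            return "avoid_locally"      # small: try alternative local paths
        if blocked_time_s < 60.0:
            return "stop_and_wait"      # medium/large: wait up to 1 minute
        return "cancel_task"            # still blocked: cancel GoTo task and job
    if obstacle_size_m <= 0.50:
        return "avoid_locally"          # small/medium: local avoidance
    if blocked_time_s < 15.0:
        return "attempt_avoidance"      # large: keep trying for ~15 seconds
    return "fail_and_replan"            # report failure; the planner replans
```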

Moving

 

Mapping is a functional capability in which the robot simultaneously localizes itself and maps the environment based on the laser scan that is made available by the perception stack. The robot may only start the mapping functionality when it has an accurate starting position, for example in front of a marker or from several RTK-GNSS fixes gathered while making a short drive. While mapping, the laser scan is used to determine which grid cells on a grid map are occupied, which are free, and which are unknown. The laser scan is also used to keep track of the robot's current position in that same map. To maintain an accurate position and map, the robot's speed is reduced.
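The sketch below illustrates the grid update step only: each laser beam marks the cells it traverses as free and its endpoint as occupied, while untouched cells remain unknown. The simultaneous pose estimation is omitted, and the value encoding is an assumption.

```python
# Sketch of the occupancy-grid update during mapping. Cell values (assumed):
# -1 = unknown, 0 = free, 1 = occupied. Pose estimation is omitted.
import math


def bresenham(r0, c0, r1, c1):
    """Grid cells on the straight line from (r0, c0) to (r1, c1), inclusive."""
    cells, dr, dc = [], abs(r1 - r0), abs(c1 - c0)
    sr = 1 if r1 >= r0 else -1
    sc = 1 if c1 >= c0 else -1
    err = dr - dc
    while True:
        cells.append((r0, c0))
        if (r0, c0) == (r1, c1):
            return cells
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r0 += sr
        if e2 < dr:
            err += dr
            c0 += sc


def integrate_scan(grid, resolution, robot_rc, robot_yaw, ranges, angle_min, angle_inc):
    for i, rng in enumerate(ranges):
        if not math.isfinite(rng):
            continue  # skip invalid or out-of-range beams
        angle = robot_yaw + angle_min + i * angle_inc
        end_r = robot_rc[0] + int(round(rng * math.sin(angle) / resolution))
        end_c = robot_rc[1] + int(round(rng * math.cos(angle) / resolution))
        ray = bresenham(robot_rc[0], robot_rc[1], end_r, end_c)
        for r, c in ray[:-1]:
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] != 1:
                grid[r][c] = 0   # beam passed through: mark free
        r, c = ray[-1]
        if 0 <= r < len(grid) and 0 <= c < len(grid[0]):
            grid[r][c] = 1       # beam endpoint: mark occupied
```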

Mapping