
Autonomous capabilities

The Autopilot Inference is a collection of software components that give the robot the ability to move and interact with the real world. Moving in the real world is often a prerequisite for more complex jobs, such as follow-me, inspection of assets, or surveying an area. The autonomous capabilities allow the robot to move from one location to another, while avoiding obstacles, to execute a series of tasks designed by the user. Tasks are designed and scheduled through our desktop application, Cerebra Studio.

The Autopilot Inference currently provides the following autonomous capabilities for the robot:

- Localization
- Navigation
- Obstacle avoidance
- Docking

Some of these capabilities require a so-called "localization map", which is a grid-based map indicating occupied and free space in the environment. Users can create such a map of the environment in which the robot will operate by following the instructions in Origin One: Getting Started - Mapping.

Localization

Localization is the ability of the robot to determine its own position in known indoor and outdoor environments.

The Autopilot Inference uses a combination of four approaches to keep track of the robot's position during an operation. The first approach is odometry, which estimates the robot's position relative to the real-world position at which the robot was turned on, by integrating IMU data (accelerometer and gyroscope) and wheel-encoder measurements.
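To make the integration concrete, here is a minimal dead-reckoning sketch for a differential-drive robot. All names, and the simple averaging of gyroscope and encoder rotation, are illustrative assumptions, not the Autopilot Inference implementation:

```python
import math

def integrate_odometry(x, y, theta, d_left, d_right, wheel_base,
                       gyro_yaw_rate, dt):
    """Advance the pose estimate (x, y, theta) by one time step.

    d_left/d_right: wheel displacements [m] from the encoders over dt.
    gyro_yaw_rate:  yaw rate [rad/s] from the IMU, fused here with the
                    encoder-derived rotation by naive averaging (illustrative).
    """
    d_center = (d_left + d_right) / 2.0
    d_theta_enc = (d_right - d_left) / wheel_base   # rotation from encoders
    d_theta_imu = gyro_yaw_rate * dt                # rotation from gyroscope
    d_theta = 0.5 * (d_theta_enc + d_theta_imu)     # naive fusion for illustration
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

# Example: starting at the power-on pose (0, 0, 0), drive two steps.
pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(*pose, d_left=0.10, d_right=0.10, wheel_base=0.5,
                          gyro_yaw_rate=0.0, dt=0.1)   # straight ahead
pose = integrate_odometry(*pose, d_left=0.08, d_right=0.12, wheel_base=0.5,
                          gyro_yaw_rate=0.8, dt=0.1)   # gentle left turn
print(pose)
```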

The other three approaches are briefly explained below; a short sketch of the marker-based and GNSS-based computations follows the list.

Pose estimates

- Marker-based pose estimation: the robot detects a known Aruco marker using the RGB camera and estimates the pose of the marker relative to the robot. The marker has a known position in the global frame, so the pose of the robot in the global frame can be estimated by composing the relative pose between the marker and the robot with the global pose of the marker. [Image: MarkerBasedPose]
- Map-based pose estimation: the 3D LiDAR data of the robot is projected onto a 2D horizontal plane, allowing the robot to measure the range of free space at different angles. The resulting laser scan is then matched against the pre-recorded localization map of the environment. [Image: MapBasedPose]
- GNSS-based pose estimation: the measured latitude and longitude are converted into a Cartesian position by subtracting a prior known latitude and longitude of the global frame's origin. [Image: GnssBasedPose]
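The sketch below illustrates the marker-based pose composition and the GNSS conversion described above. The function names, the SE(2) pose convention, and the equirectangular approximation are assumptions for illustration, not the actual implementation:

```python
import math

def compose_se2(a, b):
    """Compose two 2D poses (x, y, theta): pose of B in A, then C in B."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

# Marker-based: the marker's pose in the global frame is known, and the robot
# estimates its own pose relative to the marker from the RGB detection.
marker_in_global = (10.0, 4.0, math.pi / 2)   # known (e.g., from the knowledge base)
robot_in_marker = (1.5, 0.2, -math.pi / 2)    # estimated from the camera image
robot_in_global = compose_se2(marker_in_global, robot_in_marker)

# GNSS-based: convert latitude/longitude to a local Cartesian position by
# subtracting a known origin (equirectangular approximation, fine for small areas).
EARTH_RADIUS = 6_371_000.0  # meters

def gnss_to_local(lat, lon, origin_lat, origin_lon):
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    x = EARTH_RADIUS * d_lon * math.cos(math.radians(origin_lat))  # east
    y = EARTH_RADIUS * d_lat                                       # north
    return x, y

print(robot_in_global, gnss_to_local(51.4489, 5.4924, 51.4480, 5.4910))
```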
Navigation

Navigation is the ability to plan a path and to move from any known current position to any known destination.

The Autopilot Inference provides three navigation approaches to move the robot from its starting position to a destination. Which approach applies depends on the information available: whether a localization map exists, and whether the robot's position in the global frame is known. The robot's position can be obtained from a variety of sources, including a known Aruco marker, a map, or GNSS measurements. Depending on the available information, the Autopilot Inference uses one of the following navigation approaches:

Navigation approaches (the illustrations show waypoint, path, and costmaps)
Map-constrained autonomy is a navigation approach in which the robot first plans a path on a global costmap to a set destination, where the global costmap is initialized from the occupied areas (high cost) of the localization map. The robot then moves along that self-planned path via its controller. The controller spawns a number of trajectories with a short horizon (up to 2-3 meters) and selects the trajectory that stays closest to the self-planned path, unless the local costmap dictates otherwise. The local costmap is a short-horizon map of 2-3 meters surrounding the robot that uses LiDAR measurements to detect occupied areas. Trajectories through or near occupied areas are discarded from selection (this selection step is sketched below).

As the name suggests, this navigation approach requires a localization map, where both the current position of the robot and the set destination are on the map, since the robot is constrained to planning paths that are within this map. Further, while driving, the robot should maintain an accurate estimation of its own position in the global frame, or map frame, from start to destination.
[Image: MapAutonomy]
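The following sketch shows the kind of selection step described above: spawn short-horizon trajectories, discard those through or near occupied areas, and pick the one that stays closest to the plan. All names, the cost scale, and the scoring rule are illustrative assumptions:

```python
import math

def select_trajectory(candidates, planned_path, local_costmap, lethal_cost=90):
    """candidates:   list of candidate trajectories, each a list of (x, y) points.
    planned_path:    list of (x, y) points of the (self-)planned path.
    local_costmap:   callable (x, y) -> cost in [0, 100]."""
    def distance_to_path(pt):
        return min(math.dist(pt, p) for p in planned_path)

    best, best_score = None, float("inf")
    for traj in candidates:
        # Discard trajectories through or near occupied areas of the costmap.
        if any(local_costmap(x, y) >= lethal_cost for x, y in traj):
            continue
        # Prefer the trajectory that stays closest to the planned path.
        score = sum(distance_to_path(pt) for pt in traj) / len(traj)
        if score < best_score:
            best, best_score = traj, score
    return best  # None means no feasible trajectory: the robot reports a failure

# Example with a toy costmap that marks a disk around (1.0, 0.5) as occupied:
def costmap(x, y):
    return 100 if math.dist((x, y), (1.0, 0.5)) < 0.3 else 0

path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
left = [(0.5, 0.2), (1.0, 0.4), (1.5, 0.2)]     # grazes the occupied disk
right = [(0.5, -0.2), (1.0, -0.4), (1.5, -0.2)]
print(select_trajectory([left, right], path, costmap))  # -> the right-hand swerve
```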
Path-constrained autonomy is a navigation approach in which the robot receives a path and then moves along that received path via its controller. The controller spawns a number of trajectories with a short horizon (up to 2-3 meters) and selects the trajectory that stays closest to the preplanned path, yet always within a set maximum distance to the path, unless the local costmap dictates otherwise. The local costmap is a short-horizon map of 2-3 meters surrounding the robot that uses LiDAR measurements to detect occupied areas. Trajectories through or near occupied areas are discarded from selection, just like trajectories that exceed the maximum allowable distance from the path.

As the name suggests, this navigation approach requires a path, and can thus be used without a localization map, for example when the path is drawn on a satellite map or when moving from one map to another. The robot is (softly) constrained to stay within a set distance of this path (the corridor check is sketched below). Further, while driving, the robot should maintain an accurate estimate of its own position in the global frame, or map frame, from start to destination.
[Image: PathAutonomy]
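Path-constrained autonomy adds a corridor constraint on top of the costmap check sketched above. A minimal sketch, assuming a hypothetical max_dist_to_path parameter:

```python
import math

def within_corridor(traj, received_path, max_dist_to_path=1.0):
    """Keep only trajectories whose points stay within max_dist_to_path of the
    received path; trajectories that stray farther are discarded, just like
    trajectories through occupied areas."""
    return all(min(math.dist(pt, p) for p in received_path) <= max_dist_to_path
               for pt in traj)
```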
Nearby-constrained autonomy is a navigation approach in which the robot receives a nearby waypoint, i.e., a position within 2-3 meters of its current position. The position of the waypoint is defined relative to the robot, where (2.0, 0.0) marks a position 2 meters in front of the robot, while (0.0, 2.0) marks a position 2 meters to its right. The controller spawns a number of trajectories to the nearby waypoint and then uses the local costmap to determine the most feasible trajectory, i.e., one avoiding known occupied areas. If the waypoint lies behind an object, the controller may have difficulty selecting a feasible trajectory.

This navigation approach does not require the robot's position in the global frame, so it can be used when there is no map or no RTK-GNSS available (the waypoint convention is sketched below).
[Image: NearbyAutonomy]
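A small sketch of the waypoint convention described above, converting a robot-relative (forward, right) waypoint into a fixed frame. The function name and frame conventions are assumptions for illustration:

```python
import math

def nearby_waypoint_to_odom(waypoint, robot_pose):
    """waypoint:  (forward, right) in meters, relative to the robot, as in the
                  text: (2.0, 0.0) is 2 m ahead, (0.0, 2.0) is 2 m to the right.
    robot_pose:   (x, y, theta) in the odometry frame."""
    fwd, right = waypoint
    x, y, theta = robot_pose
    # Forward maps along the heading; "right" is 90 degrees clockwise from it.
    return (x + fwd * math.cos(theta) + right * math.sin(theta),
            y + fwd * math.sin(theta) - right * math.cos(theta))

print(nearby_waypoint_to_odom((2.0, 0.0), (0.0, 0.0, 0.0)))  # -> (2.0, 0.0)
```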

Info

The robot uses one of three predefined control strategies to move between start and destination:

  1. FollowPathLoosely: The robot is allowed to follow the path that was planned or set for it only coarsely. It has the freedom to move around ad-hoc obstacles.
  2. FollowPathRoughly: The robot is allowed to deviate from the path to avoid obstacles, but it will never stray far from it.
  3. FollowPathStrictly: The robot is not allowed to deviate from the path. It will therefore drive more slowly, yet also more accurately along the path that was planned or set for it.

Each control strategy is associated with a Behavior of the robot, which influences how obstacles are avoided.
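One way to picture the three strategies is as parameter presets for the controller. The dictionary below is purely illustrative; the actual parameter names and values are not documented here:

```python
# Invented presets for illustration only (not documented parameters).
CONTROL_STRATEGIES = {
    "FollowPathLoosely":  {"max_dist_to_path": None, "speed_factor": 1.0},  # free to re-route
    "FollowPathRoughly":  {"max_dist_to_path": 1.0,  "speed_factor": 1.0},  # bounded deviation
    "FollowPathStrictly": {"max_dist_to_path": 0.0,  "speed_factor": 0.5},  # slow but accurate
}
```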

Obstacle avoidance

Obstacle avoidance is the ability to prevent the robot from hitting objects while moving.

We distinguish between four types of ad-hoc objects that the robot may avoid: small-sized objects, medium-sized objects, large objects, and narrow openings. Here, the term "ad-hoc" means that these objects are not present on the floorplan of the environment as used by the robot, i.e., the localization map. Whether the robot will avoid an obstacle also depends on whether its autonomy is map-constrained or path-constrained: the robot has more freedom to re-plan a path when it is map-constrained, whereas when it is path-constrained it must stick to a specific path and may not exceed the distance limit between the path and its short-horizon trajectories.

Avoidance (the illustrations show the robot in its local costmap)
Avoid small-sized obstacles up to 15 cm. This capability is available for map-constrained as well as path-constrained autonomous navigation, when selecting the FollowPathLoosely or FollowPathRoughly controller. The idea is that small objects become visible as occupied areas on the local costmap, and the controller automatically generates a trajectory that avoids the obstacle while satisfying the other policies (speed limit and maximum distance to the path). If the maximum distance to the path is very tight, e.g., 30 cm, it is unlikely that a suitable trajectory will be found, and the robot will report a failure. [Image: SmallObstacle]
Avoid medium-sized obstacles of 15 to 50 cm by generating new trajectories. This capability is available for map-constrained as well as path-constrained autonomous navigation, yet only when selecting the FollowPathLoosely controller (although one may also expect some avoidance from the FollowPathRoughly controller, but not from FollowPathStrictly). The idea is that, to avoid the object, the controller keeps generating new trajectories until one is found that goes around the occupied areas and also satisfies the other policies (speed limit and maximum distance to the planned/received path). While the robot tries out different avoidance maneuvers, it is expected that a suitable trajectory is eventually found and the obstacle is avoided. [Image: MediumObstacle]
Avoid large obstacles by re-planning and creating new paths. This capability is only available for map-constrained autonomous navigation, independent of which controller was selected (FollowPathLoosely, FollowPathRoughly, or FollowPathStrictly). The idea is that, to avoid the object, the robot keeps generating new trajectories until one is found that goes around the occupied areas. However, while the robot tries out different avoidance maneuvers, no suitable trajectory will ever be found, as the object is simply too large. After some period of trying out different maneuvers, the controller therefore returns a failure for the initially planned path. Fortunately, for map-constrained autonomous navigation, the behavior may (once or twice) plan a new path from the current position of the robot, which is now in front of the large obstacle, to the destination. And since the obstacle is now also present on the occupancy map of the robot, the newly planned path will (most likely) avoid the large object immediately (this re-planning loop is sketched below). [Image: LargeObstacle]
Pass through narrow openings (of at least 1 meter). This capability is available for both map-constrained and path-constrained autonomous navigation, and for all controllers (FollowPathLoosely, FollowPathRoughly, and FollowPathStrictly), depending on how the path is defined with respect to the opening. The idea is that the robot keeps generating new trajectories while trying to pass through the narrow opening, satisfying the other policies (speed limit and maximum distance to the planned/received path). The robot will advance slightly along a prior trajectory that was not fully satisfactory, and eventually, it is expected that a suitable trajectory is found and the robot passes the opening. [Image: ThroughOpening]
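The re-planning behavior for large obstacles can be summarized as a loop: follow the planned path, and when the controller gives up, plan a fresh path on the now-updated costmap. A sketch with hypothetical callables, not the real interface:

```python
def navigate_with_replanning(start, goal, plan_path, follow_path, get_pose,
                             max_replans=2):
    """plan_path(pose, goal) plans on the global costmap (which, after a
    failure, also contains the detected obstacle); follow_path(path) returns
    True on success or False when the controller gives up after its avoidance
    maneuvers; get_pose() reads the robot's current pose."""
    pose = start
    for _ in range(1 + max_replans):
        path = plan_path(pose, goal)
        if path is None:
            return False          # no path exists on the current costmap
        if follow_path(path):
            return True           # destination reached
        pose = get_pose()         # blocked: re-plan from in front of the obstacle
    return False                  # still blocked after the allowed re-plans
```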

Info

The robot's behavior when encountering small, medium, or large obstacles can be influenced by various factors, such as the actual situation, randomization, and the robot's speed. On encountering a small-sized obstacle, the robot might behave as if avoiding a medium-sized obstacle, and likewise, on encountering a medium-sized obstacle it might behave as if avoiding a large one. Hence, the robot may exhibit different behavior even when the same obstacle is encountered in a similar situation.

Warning

When the robot encounters a slowly moving object, such as a human, it will try to avoid the moving object, typically by driving backwards slowly. However, be aware that this behavior is not guaranteed: if you approach the robot too fast, it will hit you.

Docking

Docking is the ability to accurately position oneself with respect to a real-world object.

The Autopilot Inference uses Aruco markers to obtain an initial position and as a visual target. In the first case, the position of the marker needs to be available in the robot's knowledge base, while in the second case, it can be any Aruco marker generated from a specific source, without the need to add any information about that marker to the robot's knowledge base. Aruco markers can be generated from this source.
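The documentation does not prescribe a detection pipeline, but as an illustration, OpenCV's aruco module (API of OpenCV 4.7 and later) can detect a marker and estimate its pose relative to the camera. The dictionary choice, image file, and camera intrinsics below are placeholders:

```python
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

image = cv2.imread("rgb_frame.png")              # placeholder RGB camera frame
corners, ids, _ = detector.detectMarkers(image)

if ids is not None:
    s = 0.20 / 2                                 # 20x20 cm marker half-size [m]
    object_points = np.array([[-s, s, 0], [s, s, 0],
                              [s, -s, 0], [-s, -s, 0]], dtype=np.float32)
    camera_matrix = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]],
                             dtype=np.float32)   # placeholder intrinsics
    ok, rvec, tvec = cv2.solvePnP(object_points, corners[0].reshape(4, 2),
                                  camera_matrix, None)
    print("marker", ids[0], "at", tvec.ravel())  # translation in camera frame
```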

For this capability, we distinguish between docking in front of a marker and undocking from a marker.

Docking and Undocking (the illustrations show the robot in its local costmap)
Docking in front of a marker. This capability operates independently of the FollowPathXXX controllers, maps, or the robot's position in the global frame. It only requires that an Aruco marker (20x20 cm in size) is visible in the robot's RGB image. For successful operation the robot must be 1.5 to 3 meters away from the marker, and the marker should be within ±30 degrees of the robot's field of view (these preconditions are sketched below). When these conditions are met, the robot will:

  1. Move to a position where its nose is approximately 0.5 meter directly in front of the marker (this places the base-link about 1 meter away).
  2. Drive forward to continue to the user-defined goal position for docking.
Note:
- The robot will not perform obstacle avoidance, so ensure its path is clear.
- The goal pose is defined relative to the robot's base-link, which is about 50 cm behind its nose.
- The allowed offset from the marker cannot exceed 1 meter.
[Image: Docking]
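A quick sketch of the stated docking preconditions (1.5 to 3 meters range, within ±30 degrees), assuming the marker position is expressed in the robot frame with x forward and y lateral:

```python
import math

def can_start_docking(marker_x, marker_y):
    """marker_x, marker_y: marker position in the robot frame [m]."""
    distance = math.hypot(marker_x, marker_y)
    bearing = math.degrees(math.atan2(marker_y, marker_x))
    return 1.5 <= distance <= 3.0 and abs(bearing) <= 30.0

print(can_start_docking(2.0, 0.5))   # True: ~2.06 m away, ~14 degrees off-axis
```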
Undocking (possibly from a marker). This capability has only one requirement: the robot must have been turned on and initialized on a known Aruco marker. Once activated, the robot will simply drive backwards at 0.5 m/s until it has moved 1 meter.
Note:
The primary purpose of this capability is to help the robot escape from situations where it has positioned itself too close to obstacles, such as walls, tables, or objects in tight spaces. This often happens during docking or when navigating crowded areas. In such cases, the robot may be unable to plan a new path because it is too close to surrounding objects. By using this undocking capability, the robot creates space around itself, allowing it to resume normal path planning from a clearer position (a timing sketch follows below).
[Image: Undocking]
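Since undocking reduces to a timed reverse motion, 1 meter at 0.5 m/s takes 2 seconds. A sketch with a hypothetical velocity-command helper:

```python
import time

def undock(send_velocity, distance=1.0, speed=0.5):
    """Drive straight backwards until `distance` meters are covered.
    send_velocity(linear, angular) is a hypothetical command helper."""
    duration = distance / speed              # 1.0 m / 0.5 m/s = 2.0 s
    send_velocity(linear=-speed, angular=0.0)
    time.sleep(duration)
    send_velocity(linear=0.0, angular=0.0)   # stop
```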

Info

Please continue to the Behaviors of the Autopilot Inference.