
Behaviors

The Origin One is designed to operate on a list of behaviors that are to be executed. A user prepares an operation by setting a job and its tasks (what should be done), which are then translated into this list of behaviors (how it should be done). The Autopilot Inference subsequently executes these behaviors, one after another, to complete the job.

The Autopilot Inference implements six carefully engineered and well-tested behaviors that are key to the successful execution of a job. In this section, we detail each of these behaviors, which are the first aspect observed by a user when executing a job. Defining jobs and tasks, along with the information required for execution, is detailed in the section on operational principles.

We will skip some details about preparing an operation and start directly from the situation where the Autopilot Inference has established a list of behaviors that the robot needs to execute, one after another. Note that this list may also be set via the robot's ROS2 API, using the set_and_execute_behaviors service.

The Autopilot Inference operates on this list of behaviors as follows:

  1. Request the next behavior in the list that is to be executed.
  2. Populate the specifics of the behavior by loading a so-called "behavior tree".
  3. Trigger the nodes of this tree and check whether they are RUNNING, SUCCESSFUL, or FAILED.
  4. If all nodes report SUCCESSFUL, the behavior as a whole has been executed successfully; otherwise, the behavior has failed.
  5. Clear the behavior from the list.
  6. Return to step 1, until the list is cleared.
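The six steps above can be sketched as a simple execution loop. This is an illustrative sketch only, not the autopilot's actual implementation; the function and parameter names (`execute_behavior_list`, `load_tree`, `tick_tree`) are hypothetical:

```python
from enum import Enum

class Status(Enum):
    RUNNING = "RUNNING"
    SUCCESSFUL = "SUCCESSFUL"
    FAILED = "FAILED"

def execute_behavior_list(behaviors, load_tree, tick_tree):
    """Execute behaviors one after another, as in steps 1-6 above.

    behaviors -- list of behavior names, e.g. ["MoveThroughPoses", "Wait"]
    load_tree -- callable returning the behavior tree for a behavior (step 2)
    tick_tree -- callable that ticks the tree and returns a Status (step 3)
    """
    results = []
    while behaviors:                        # step 6: repeat until the list is cleared
        behavior = behaviors[0]             # step 1: request the next behavior
        tree = load_tree(behavior)          # step 2: populate its behavior tree
        status = Status.RUNNING
        while status is Status.RUNNING:     # step 3: tick nodes until they settle
            status = tick_tree(tree)
        results.append((behavior, status))  # step 4: SUCCESSFUL or FAILED
        behaviors.pop(0)                    # step 5: clear the behavior from the list
    return results
```

Note that, per the warning below, a FAILED behavior is still cleared from the list here; in practice a failure may leave the pre-conditions of the next behavior unsatisfied.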

Info

If a behavior fails, the pre-conditions for the next behavior might not be satisfied, leading to further failures.

The Autopilot Inference operates by executing a list of behaviors, each of which is a sequence of actions and control-flow nodes. Each behavior is defined by a specific behavior tree: a hierarchical structure of nodes that are either actions, i.e., leaves of the tree that can be executed by the robot's capabilities, or control-flow nodes that arrange the flow of the tree. Control-flow nodes are used to trigger children from left to right, or to trigger a fallback option (Action 3) in case the typical solution (Action 2) fails. The Autopilot Inference uses such behavior trees to execute the behaviors of the Origin One, as explained in the following sections.
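The sequence and fallback mechanics described above can be sketched with a minimal, toy behavior-tree implementation (this is not the autopilot's actual tree engine; all class and action names are illustrative):

```python
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Action:
    """Leaf node: wraps one of the robot's capabilities."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return SUCCESS if self.fn() else FAILURE

class Sequence:
    """Control-flow node: ticks children left to right; fails on the first failure."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Control-flow node: tries children left to right; succeeds on the first success."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# A typical solution (action_2) with a fallback option (action_3):
tree = Sequence(
    Action("action_1", lambda: True),
    Fallback(
        Action("action_2", lambda: False),  # the typical solution fails ...
        Action("action_3", lambda: True),   # ... so the fallback is triggered
    ),
)
```

Ticking this tree returns SUCCESS, because the fallback node recovers from the failure of action_2 by triggering action_3.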

(Figure: an illustrative behavior tree)

MoveThroughPoses

The MoveThroughPoses behavior receives a list of poses through which it plans a path that the robot is able to execute. The path is planned by first acquiring the global position of the robot and the localization map, after which a controller will ensure that the robot follows the path while avoiding obstacles. This behavior directly triggers the corresponding navigation functionality of the autopilot and relies on the following capabilities:

  • Map-constrained autonomous navigation
  • Autonomous localization
  • Obstacle avoidance

The details of these capabilities are presented on this page.

(Figure: MoveThroughPoses overview)

What should be expected

  • The robot drives a self-planned path and slows down when arriving at its final location, making minor corrections (mainly in heading) to arrive at the destination.
  • The robot plans a path from its current position to the final destination (possibly via any intermediate poses that were defined) using the map.
  • Small obstacles (up to 15 cm high) are avoided directly.
  • Medium obstacles (up to 50 cm high) trigger a local adjustment to the path.
  • Large obstacles (up to 3 meters wide) trigger one or two full path-replanning attempts.
  • Hallways and doorways wider than 90 cm can be passed safely; passages narrower than 80 cm may be too tight and could be infeasible for the robot.

Possible faults

  • The robot stops driving, possibly caused by a bad position estimate, or because it cannot position itself on the map. This can occur when:
    - No marker was detected for a while.
    - A marker from a completely different location was detected.
    - No RTK-GNSS fix is present.
    In these cases, manually take over the robot and drive it to an Aruco marker or an area with RTK reception.
  • The robot is not able to arrive at its destination, which often occurs when there is an obstacle on or near the final destination.
  • The robot is not able to find a path to its final destination. This can occur when the recorded map does not allow it, or when the global costmap is limiting the maneuverability of the robot (clearing the global costmap might help). If no viable path can be found, the mission is aborted.
  • The robot fails to avoid an obstacle. The object may be too wide, or the surrounding space may be too limited to allow safe maneuvering. The robot may also collide with an object instead of avoiding it. This can happen if: 1) the object has highly reflective surfaces, 2) the LiDAR sensor is affected by direct sunlight, or 3) the object is wider at its base than at its top. Since the robot starts detecting obstacles a few centimeters above the ground, such objects may be partially or fully missed.

MoveAlongPath

The MoveAlongPath behavior receives a path that the robot is commanded to execute. To start, the robot searches along the given path for the point closest to its current position; this may be halfway along the path if the robot is already located halfway. The given path is then sent to a controller that ensures the robot follows the path while avoiding obstacles.
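The initial search for the nearest point on the path can be sketched as follows (an illustrative sketch only; `closest_path_index` is a hypothetical helper, not the autopilot's API):

```python
import math

def closest_path_index(path, position):
    """Return the index of the path waypoint closest to the robot's position.

    path     -- list of (x, y) waypoints
    position -- (x, y) position of the robot
    """
    return min(range(len(path)), key=lambda i: math.dist(path[i], position))

# The robot starts tracking the path from this waypoint onward, which may be
# halfway along the path if the robot is already located halfway:
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
start = closest_path_index(path, (2.1, 0.3))  # -> 2
```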

This behavior directly relies on the following capabilities of the autopilot:

  • Path-constrained autonomous navigation
  • Autonomous localization
  • Obstacle avoidance

The details of these capabilities are presented on this page.

(Figure: MoveAlongPath overview)

What should be expected

  • The robot smoothly drives the path and slows down when arriving at the final location, making minor corrections (mainly in heading) to arrive at the destination.
  • The robot uses its current position to compute velocities that keep it on the path.
  • Small obstacles (up to 15 cm) are avoided directly. Medium obstacles (up to 50 cm) trigger a local adjustment of the path. Large obstacles (up to 3 meters) trigger some trial and error, after which the entire task is aborted (and possibly the job, as the preconditions for the next behavior may not be met). Hallways and doorways wider than 90 cm can be passed, while passages narrower than 80 cm may turn out to be infeasible for the robot.

Possible faults

  • The robot may stop driving due to a bad position estimate, which can occur when:
    - no marker was detected for a while,
    - a marker from a different location was detected, or
    - no RTK-GNSS fix is present.
    In these cases, manually take over the robot and drive it to an Aruco marker or an area with RTK reception.
  • The robot may not be able to arrive at its destination if there is an obstacle on or near the final destination.
  • The robot may not start driving if the starting point of the path is too far away from the robot's current position (> 2/3 meter).
  • The robot may not be able to avoid an object if:
    - the object is too wide,
    - the space around the object is too limited,
    - the object is highly reflective, or
    - the LiDAR sensor is affected by direct sunlight.
    In addition, the robot may not detect objects that are wider at the bottom than at the top, since it only starts detecting objects several centimeters above the ground plane.

CoverArea

The CoverArea behavior commands the robot to cover an area by driving over every part of it. The area is defined by one or more polygons, each labeled as either a Go-area (which the robot should stay within) or a No-go area (which the robot should stay out of). To ensure coverage, the robot plans a path: either a lawnmower pattern of straight legs, or a spiral pattern from the center to the perimeter of the area. External contours can also be planned to ensure coverage near the boundary. The path is planned by first acquiring the robot's position and, if requested and available, the localization map. The planned path is then sent to a controller that ensures the robot follows it while avoiding, or stopping for, obstacles.
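For intuition, the lawnmower pattern can be sketched for a simple rectangular Go-area. This is an illustrative sketch under strong assumptions (an axis-aligned rectangle, no No-go areas, no localization map, no boundary contours); the real planner handles arbitrary polygons. The 40 cm leg spacing matches the spacing used by the behavior:

```python
def lawnmower_path(width, height, spacing=0.4):
    """Plan a boustrophedon (lawnmower) pattern over a width x height rectangle,
    with straight parallel legs `spacing` meters apart.

    Returns a list of (x, y) waypoints, alternating driving direction per leg.
    """
    path = []
    y, going_right = 0.0, True
    while y <= height + 1e-9:
        leg = [(0.0, y), (width, y)]
        # Reverse every other leg so consecutive legs connect with a U-turn.
        path.extend(leg if going_right else leg[::-1])
        going_right = not going_right
        y += spacing
    return path
```

For example, a 2.0 m x 0.8 m rectangle yields three legs (at y = 0.0, 0.4, and 0.8 m), i.e., six waypoints in total.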

The CoverArea behavior relies on the following capabilities of the autopilot:

  • Map-constrained autonomous navigation
  • Autonomous localization
  • Obstacle avoidance

The details of these capabilities are presented on this page.

(Figure: CoverArea overview)

Note

We will assume that either FollowPathRoughly (with some avoidance) or FollowPathStrictly (no avoidance) was selected, as FollowPathLoosely will not be able to make proper U-turns. Also, the robot must start within the area that it needs to cover.

What should be expected

  • The robot plans a lawnmower pattern (or contours), in which the legs of the pattern (or the gaps between contours) are 40 cm apart. Depending on whether the localization map and no-go areas are available, the pattern is planned such that the robot avoids known occupied and no-go areas. Once planned, the robot drives to one of the legs and then:
    - either drives slowly and steadily along the legs, when FollowPathStrictly is selected, making an abrupt U-turn at the end of each leg to start the next one;
    - or drives quicker with wide turns, when FollowPathRoughly is selected, implying that the robot needs to make a forward driving correction to get onto the next leg of the pattern.
  • When all legs are done, the robot drives two more paths parallel to the contour of the area to be covered.

Possible faults

  • The robot stops driving. This can be caused by an ad-hoc object that was not present on one of the maps, by the area to be covered being located in a tight space, or by the robot not being able to drive at least 10 cm in 10 seconds. Note that for this behavior the robot is sensitive to unknown obstacles in or near the area.
  • The robot does not start moving and aborts the task. This is likely caused by the robot not being able to plan a path, for example when no proper polygon was defined for the area to be covered, when the No-go area exceeds the Go-area, or when the robot does not start at a position within the polygon of the area it needs to cover.
  • The robot skips much of the area. This can happen when the area is very small, e.g., less than 1x1 meter, as the robot tries to skip the first 3 meters of a planned path, which it may easily do when the path is a lawnmower pattern.

MoveInDock and MoveOutDock

The MoveInDock behavior receives the ID of an Aruco marker and an offset distance as input; it docks in front of the marker at the specified distance. The MoveOutDock behavior does not require any input; it drives slowly backwards for approximately 1 meter. Both behaviors directly trigger the corresponding navigation functionality of the autopilot, as presented on this page.
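The relation between the marker pose and the docking goal can be sketched as a small geometry example (a hypothetical helper for illustration only, not the autopilot's API; it ignores the intermediate goal pose and alignment steps):

```python
import math

def docking_goal(marker_x, marker_y, marker_yaw, offset):
    """Compute the docking goal position: `offset` meters straight in front of
    the Aruco marker, along the direction the marker is facing.

    Note that the offset is measured to the center of the robot, not its nose.
    """
    return (marker_x + offset * math.cos(marker_yaw),
            marker_y + offset * math.sin(marker_yaw))

# A marker at the origin facing along +x, with a 1-meter offset:
goal = docking_goal(0.0, 0.0, 0.0, 1.0)  # -> (1.0, 0.0)
```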

(Figure: MoveInDock/MoveOutDock overview)

What should be expected (MoveInDock)

  • The robot detects the Aruco marker, drives to an intermediate goal pose, turns to align itself straight in front of the marker, and then drives to its final goal position at the specified offset distance from the marker.

Possible faults

  • The robot does not start driving. This may be caused by the Aruco marker not being detected. To ensure detection, the robot must start with its camera pointed at the marker (max viewing angle of 30 degrees) at a distance of up to 3/4 meters, and there should be no obstacles between the starting and docking positions.
  • The robot hits an object while driving. Obstacle avoidance is turned off for this behavior, so keep the docking area free of obstacles.
  • The robot drives further than the given offset. This may be caused by an incorrect offset: the offset is defined as the distance between the marker and the center of the robot, which is 50 cm behind the nose of the robot.

What should be expected (MoveOutDock)

  • The robot drives slowly backwards for approximately 1 meter.

Possible faults

  • The robot does not stop driving, or drives too far. This may occur when the robot is not able to keep track of its position using the IMU and its wheel encoders, for example when the wheels slip. If the robot drives too far, hit the emergency brake to stop it, as it has no notion of obstacles behind it.

Wait

The Wait behavior receives a time, in seconds, as input and makes the robot stand idle for that amount of time.

What should be expected

  • The robot will stand still at its current position for the number of seconds defined in the behavior.

Possible faults

  • None known.

Speak

The Speak behavior takes a sentence as input and publishes it to two locations:

  1. An internal log for debugging purposes.
  2. The ROS2 topic /autopilot/text_to_speak, which can be used to trigger a text-to-speech module.

We plan to release a development example for this topic in the future, but you are free to create your own solution using the robot's Ethernet port.

(Figure: Speak overview)

Info

Please continue to the high-level design of the Autopilot Inference.