
Behaviors

Missions of the Origin One can be defined via so-called jobs, where a job is a list of tasks defining what should be done where. Tasks are further decomposed into behaviors, which specify how a task should be executed. Defining jobs and tasks, along with the information that the robot requires to execute them, is explained in the section on operational principles. However, since behaviors are a key concept in robotics, since the Autopilot Inference implements five carefully engineered and well-tested behaviors, and since they are the first thing a user will observe, this page presents the engineered behaviors of the Origin One.

We will skip some details about preparing an operation and start directly from a situation where the Autopilot Inference has established a list of behaviors that the robot needs to execute, one after another. Note that this list may also be set via the robot's ROS2 API, using the service set_and_execute_behaviors.
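
The exact request type of that service is defined by the interface packages shipped with the robot and is not reproduced on this page. The sketch below therefore only shows the generic rclpy service-call pattern; the import autopilot_msgs.srv.SetAndExecuteBehaviors and the request.behaviors field are hypothetical placeholders, not the documented interface.

```python
import rclpy
from rclpy.node import Node

# Hypothetical import: substitute the service definition that ships with the robot.
from autopilot_msgs.srv import SetAndExecuteBehaviors


class BehaviorListClient(Node):
    """Sends a list of behaviors to the Autopilot Inference via its ROS2 service."""

    def __init__(self):
        super().__init__("behavior_list_client")
        self.client = self.create_client(SetAndExecuteBehaviors, "set_and_execute_behaviors")

    def send(self, behaviors):
        if not self.client.wait_for_service(timeout_sec=5.0):
            self.get_logger().error("set_and_execute_behaviors service not available")
            return None
        request = SetAndExecuteBehaviors.Request()
        request.behaviors = behaviors  # hypothetical field name
        future = self.client.call_async(request)
        rclpy.spin_until_future_complete(self, future)
        return future.result()


def main():
    rclpy.init()
    node = BehaviorListClient()
    node.send(["Speak", "Wait"])  # behavior names taken from this page
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```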

The Autopilot Inference operates on this list of behaviors as follows:

  1. Request the next behavior in the list that is to be executed.
  2. Populate the specifics of the behavior by loading a so-called behavior tree.
  3. Trigger the nodes of this tree and check whether they are RUNNING, SUCCESSFUL, or FAILED.
  4. If all nodes report SUCCESSFUL, the behavior as a whole is executed successfully; otherwise the behavior has failed.
  5. Clear the behavior from the list.
  6. Return to step 1, regardless of whether the behavior succeeded or failed, until the list is empty.
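
As a minimal sketch of this loop, the Python snippet below walks through the six steps; the BehaviorTree stub and its statuses are placeholders, not the Autopilot Inference internals.

```python
# Illustrative sketch of the loop above; the BehaviorTree stub is a placeholder.

class BehaviorTree:
    """Minimal stand-in for a behavior tree that settles after a few ticks."""

    def __init__(self, name, ticks_to_finish=3, succeeds=True):
        self.name = name
        self.remaining = ticks_to_finish
        self.succeeds = succeeds

    def tick(self):
        self.remaining -= 1
        if self.remaining > 0:
            return "RUNNING"
        return "SUCCESSFUL" if self.succeeds else "FAILED"


def execute_behavior_list(behaviors):
    results = []
    while behaviors:
        name = behaviors[0]              # step 1: request the next behavior in the list
        tree = BehaviorTree(name)        # step 2: populate it by loading its behavior tree
        status = "RUNNING"
        while status == "RUNNING":       # step 3: trigger the nodes and check their status
            status = tree.tick()
        results.append((name, status))   # step 4: SUCCESSFUL or FAILED for the whole behavior
        behaviors.pop(0)                 # step 5: clear the behavior from the list
        # step 6: continue regardless of the outcome until the list is empty
    return results


print(execute_behavior_list(["MoveThroughPoses", "Wait", "Speak"]))
```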

An illustrative example of a behavior tree is shown below. It depicts a tree of nodes that are either actions, i.e., things that can be executed by the robot's capabilities, or nodes that arrange the flow of the tree, i.e., sequentially triggering children from left to right, or triggering a fallback option (Action 3) in case the typical solution (Action 2) fails. The other sections on this page explain the different behaviors that the Autopilot Inference is able to execute on the Origin One.

[Figure: IllustrativeBehavior]
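
The sequence and fallback logic of such a tree can be pictured with a few lines of Python. This is a generic behavior-tree sketch, not the tree format used by the Autopilot Inference; the Action, Sequence, and Fallback classes below are illustrative stand-ins.

```python
# Generic illustration of sequence and fallback nodes.

SUCCESSFUL, FAILED = "SUCCESSFUL", "FAILED"


class Action:
    """Leaf node representing something the robot's capabilities can execute."""

    def __init__(self, name, succeeds=True):
        self.name, self.succeeds = name, succeeds

    def tick(self):
        print(f"ticking {self.name}")
        return SUCCESSFUL if self.succeeds else FAILED


class Sequence:
    """Ticks its children from left to right and fails as soon as one child fails."""

    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == FAILED:
                return FAILED
        return SUCCESSFUL


class Fallback:
    """Ticks its children from left to right and succeeds as soon as one child succeeds."""

    def __init__(self, children):
        self.children = children

    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESSFUL:
                return SUCCESSFUL
        return FAILED


# Action 3 is only ticked because Action 2 fails, mirroring the fallback in the figure.
tree = Sequence([Action("Action 1"),
                 Fallback([Action("Action 2", succeeds=False), Action("Action 3")])])
print(tree.tick())
```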

MoveThroughPoses

The MoveThroughPoses behavior receives a list of poses through which it plans a path that the robot is able to execute. The path is planned by first acquiring the global position of the robot and the localization map, after which a controller ensures that the robot follows the path while avoiding obstacles. This behavior directly triggers the corresponding navigation capability of the Navigation stack. It relies on map-constrained autonomous navigation, in addition to autonomous localization and obstacle avoidance (see this page).

[Figure: MoveThroughPosesOverview]
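
As an illustration of what "a list of poses" amounts to in ROS2 terms, the sketch below builds such a list from standard geometry_msgs/PoseStamped messages. The map frame id and the way the list is handed to the behavior (normally via a job definition, see the operational principles section) are assumptions here, not part of this page.

```python
import math

from geometry_msgs.msg import PoseStamped


def make_pose(x, y, yaw, frame_id="map"):
    """Builds a PoseStamped at (x, y) with the given heading in radians."""
    pose = PoseStamped()
    pose.header.frame_id = frame_id
    pose.pose.position.x = float(x)
    pose.pose.position.y = float(y)
    # A planar rotation about the z-axis expressed as a quaternion.
    pose.pose.orientation.z = math.sin(yaw / 2.0)
    pose.pose.orientation.w = math.cos(yaw / 2.0)
    return pose


# Two intermediate poses and a final destination.
poses = [make_pose(2.0, 0.0, 0.0),
         make_pose(4.0, 1.5, math.pi / 2),
         make_pose(4.0, 4.0, math.pi / 2)]
```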

What should be expected:
The robot smoothly drives a self-planned path and slows down when arriving at its final location. It might make minor corrections, mainly in heading, to actually arrive at the destination.

Possible faults:
The robot stops driving, possibly caused by a bad position estimate, for example when no marker has been detected for a while, when a marker from a completely different location was detected, when no GNSS-RTK fix is present, or when the robot cannot position itself on the map. In any of these cases one should manually take over the robot and drive it to an ArUco marker or an area with RTK reception.
OR
The robot is not able to arrive at its destination, which often occurs when there is an obstacle on or near the final destination.

What should be expected:
The robot uses the map to plan a path from its current position to the final destination (possibly via any intermediate poses that were defined).

Possible faults:
The robot is not able to find a path to its final destination, possibly because the recorded map does not allow it, or because the global costmap is limiting the maneuverability of the robot. Clearing the global costmap might help.

What should be expected:
Small obstacles up to 15 cm are avoided directly. Medium obstacles up to 50 cm trigger a local adjustment of the path. Large obstacles up to 3 meters trigger a replanning of the entire path (once or twice). If no new path can be found, the mission is aborted.

Possible faults:
The robot is not able to avoid the object, either because the object is too wide or because the space around the object is too limited.
OR
The robot hits an object rather than avoiding it. This may happen when the object is very light-reflective and/or when the sun is directly hitting the LiDAR sensor. Another possibility is that the object is wider at the bottom than at the top.

MoveAlongPath

The MoveAlongPath behavior receives a path that the robot is commanded to execute. To start, the robot plans a path of at most 3 meters towards the nearest point on the commanded path. Note that this point may already be halfway along the path when the robot is also located halfway. Then, the planned path and the commanded path are sent to a controller that ensures that the robot follows the path while avoiding obstacles. This behavior directly triggers the corresponding navigation capability of the Navigation stack. It relies on path-constrained autonomous navigation, in addition to autonomous localization and obstacle avoidance (see this page).

[Figure: MoveAlongPathOverview]
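
To picture how the robot joins a commanded path, the sketch below finds the nearest waypoint on a path of plain (x, y) points and checks it against a rough distance limit. The helper and the exact threshold are illustrative assumptions; the actual logic is part of the Navigation stack.

```python
import math


def nearest_waypoint(robot_xy, path_xy):
    """Returns the index of and distance to the path waypoint closest to the robot."""
    distances = [math.dist(robot_xy, waypoint) for waypoint in path_xy]
    index = distances.index(min(distances))
    return index, distances[index]


# A commanded path as (x, y) waypoints; the robot happens to start roughly halfway.
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
robot = (2.2, 0.8)

index, distance = nearest_waypoint(robot, path)
if distance > 3.0:  # rough limit from the text above; the actual threshold may differ
    print("the path is too far away, the behavior would not start")
else:
    print(f"join the path at waypoint {index}, {distance:.2f} m away")
```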

What should be expected:
The robot smoothly drives the received path and slows down when arriving at its final location. It might make minor corrections, mainly in heading, to actually arrive at the destination.

Possible faults:
The robot stops driving, possibly caused by a bad position estimate, for example when no marker has been detected for a while, when a marker from a completely different location was detected, when no GNSS-RTK fix is present, or when the robot cannot position itself on the map. In any of these cases one should manually take over the robot and drive it to an ArUco marker or an area with RTK reception.
OR
The robot is not able to arrive at its destination, which often occurs when there is an obstacle on or near the final destination.

What should be expected:
The robot uses its current position to compute velocities by which it stays on the required path.

Possible faults:
The robot does not start driving, which may occur when the starting point of the path is too far away from the robot's current position (more than 2 meters).

What should be expected:
Small obstacles up to 15 cm are avoided directly. Medium obstacles up to 50 cm trigger a local adjustment of the path that is very limited. Large obstacles up to 3 meters lead to some trial and error, after which the entire mission is aborted.

Possible faults:
The robot is not able to avoid the object, either because the object is too wide or because the space around the object is too limited.
OR
The robot hits an object rather than avoiding it. This may happen when the object is very light-reflective and/or when the sun is directly hitting the LiDAR sensor. Another possibility is that the object is wider at the bottom than at the top.

CoverArea (indoor only)

The CoverArea behavior receives at least one polygon, or perimeter, that the robot is commanded to cover, meaning that it should drive over every square inch of that area. Any other polygon should be labeled either as a Go-area, i.e., an area the robot should stay within, or as a Nogo-area, i.e., an area the robot should stay out of. The robot plans a path, either a lawnmower pattern of straight legs or a spiral pattern from the center to the area's perimeter, ensuring that the robot covers the area. In addition, external contours can be planned to ensure coverage near the boundary. The path is planned by first acquiring the robot's position and, if requested and available, the localization map. Then, the planned path is sent to a controller that ensures that the robot follows the path while avoiding, or stopping for, obstacles. This behavior directly triggers the corresponding navigation capability of the Navigation stack. It relies on map-constrained autonomous navigation, in addition to autonomous localization and obstacle avoidance (see this page).

[Figure: CoverAreaOverview]
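
As an illustration of the lawnmower pattern, the sketch below generates parallel legs over a simple axis-aligned rectangle using the 40 cm spacing mentioned below. The real planner works on arbitrary polygons, Go/Nogo-areas, and the localization map, so this is only a sketch of the pattern itself, not of the planner.

```python
def lawnmower_legs(x_min, x_max, y_min, y_max, spacing=0.4):
    """Generates back-and-forth legs that cover an axis-aligned rectangle."""
    waypoints = []
    y = y_min
    forward = True
    while y <= y_max + 1e-9:
        if forward:
            waypoints += [(x_min, y), (x_max, y)]
        else:
            waypoints += [(x_max, y), (x_min, y)]
        forward = not forward
        y += spacing
    return waypoints


# A 2 m x 2 m area covered with legs that are 0.4 m apart.
for point in lawnmower_legs(0.0, 2.0, 0.0, 2.0):
    print(point)
```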

What should be expected:
We assume that FollowPathStrictly was selected, as FollowPathLoosely will not be able to make U-turns, and that the robot starts within the area it needs to cover. The robot plans a path such as a lawnmower pattern, in which the legs of the pattern are 40 cm apart to create complete coverage of the area. Depending on whether it is able to use the localization map and no-go areas, the pattern is planned such that the robot avoids known occupied and no-go areas. Once planned, the robot drives to one of the legs and then drives slowly and steadily along these legs. At the end of each leg it makes an abrupt U-turn to start the next leg. When all legs are done, the robot drives two more paths parallel to the contour of the area that was to be covered.

Possible faults:
The robot stops driving, which could be caused by an ad-hoc object that was not present on one of the maps, or by the area that needs to be covered being located in a tight space. For this behavior the robot is sensitive to unknown obstacles in or near the area.
OR
The robot does not start moving and aborts the task. This is likely caused by the robot not being able to plan a path, for example when no proper polygon was defined for the area that needs to be covered, when the Nogo-area exceeds the Go-area, or when the robot does not start at a position within the polygon of the area it needs to cover.
OR
The robot skips much of the area. This can happen when the area is very small, e.g., less than 1 x 1 meters, as the robot will try to skip the first 3 meters of a planned path, which it may easily do when the path is a lawnmower pattern.

Wait

The Wait behavior receives a time, in seconds, during which the robot will stand idle.

What should be expected:
The robot waits at its current position for the number of seconds defined in the behavior.

Possible faults:
Unknown.

Speak

The Speak behavior receives a sentence, as a string, and publishes that string to an internal terminal for logging purposes; it also publishes the string on the ROS2 topic /autopilot/test_to_speak. This behavior directly triggers the InformUser capability of the Interaction stack. We are planning to release a development example by which this ROS2 topic is turned into actual spoken words from a speaker, but feel free to create such a device yourself by making use of the Ethernet port of the robot.

[Figure: SpeakOverview]
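
As a starting point for such a device, the sketch below subscribes to the topic and pipes every received string into a text-to-speech command. It assumes that the topic carries a std_msgs/msg/String and that an espeak-like TTS command is installed on the listening device; neither assumption is confirmed by this page.

```python
import subprocess

import rclpy
from rclpy.node import Node
from std_msgs.msg import String  # assumption: the topic carries a plain string message


class SpeakListener(Node):
    """Turns strings published by the Speak behavior into audible speech."""

    def __init__(self):
        super().__init__("speak_listener")
        self.create_subscription(String, "/autopilot/test_to_speak", self.on_text, 10)

    def on_text(self, msg):
        self.get_logger().info(f"speaking: {msg.data}")
        # Assumption: espeak (or a similar TTS command) is available on this device.
        subprocess.run(["espeak", msg.data], check=False)


def main():
    rclpy.init()
    rclpy.spin(SpeakListener())
    rclpy.shutdown()


if __name__ == "__main__":
    main()
```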