
Operational principles

The operational principles of the robot are essential for understanding how the robot reacts to the situation in which it is executing tasks. This page is divided into two main sections: preparing for an operation and executing an operation. The first section discusses how a user may prepare the robot for an operation, i.e., at deployment time. The second section discusses how the robot transforms this preparation into the actual execution of the operation as a list of behaviors. Additionally, the page on behaviors describes what a user may expect from the behaviors and autonomous capabilities of the robot while it is executing its operation, e.g., in which situations it should be able to go to the next waypoint or avoid obstacles.

Preparing for an operation

The cognition functionality is responsible for capturing all relevant information that is required for the robot to conduct the operation. This information is structured as a set of concepts, which are presented in the illustration below as well as in the definitions that follow. The information concepts include jobs, tasks, behaviors, maps, paths, poses, artifacts and the coordinate system.

[Illustration: AutopilotPreparation]

In more detail, the robot captures information about:

  • Jobs: a job is defined as a list of tasks.
  • Tasks: tasks specify the what and the where for the robot. A task can be a:
      • Primitive task: a task that can be directly executed by the robot using a behavior, such as Goto -some- waypoint, Follow -some- Path, Dock at -some- dock, or Undock.
      • Compound task: a task that can be broken down into multiple primitive tasks, such as a job (see this section for more details).
  • Behaviors: behaviors specify the how of executing a task. There can be multiple ways in which a robot may execute a task; for example, a Goto waypoint task can be executed via the behaviors MoveThroughPoses or MoveAlongPath, depending on whether it is constrained to a specific path. There may also be a single manner of execution; for example, a Dock task is executed via the single behavior MoveInDock, and the Undock task is executed via the behavior MoveOutDock.
  • Constraints: constraints are limitations on the autonomy of the robot that limit the decision space of the robot as to how a task is to be executed. Examples of constraints include a path constraint on a task, which specifies that the robot should follow a specific path, or go- and no-go areas defined via polygons.
  • Policies (not depicted): policies are preferences on the autonomy of the robot, making trade-offs in the decision space of the robot as to how a task is to be executed. An example of a policy is the maximum distance to the path for a Follow Path task.
  • Maps: a map is an occupancy grid of the environment in which the robot operates. The boundary of the map is defined by a polygon and it is used for localization. The map is optional, and if it is not available, then the robot should have RTK-GNSS available before it may navigate autonomously.
  • Paths: a path is a specification of how a robot should move from a starting waypoint to a final waypoint, or destination.
  • Coverage Areas: a coverage area is a polygon that defines an area that needs to be covered. Coverage areas serve as the goal of a Cover Area task.
  • Artifacts: an artifact is an object in the real world that can be visualized, such as an Aruco marker or a Waypoint. The artifact can serve as the goal of a task, such as the object to go to, while the behavior for executing that task needs the pose of the artifact directly rather than the artifact itself.
  • Coordinate System: the coordinate system of the robot is defined by a local tangent plane, i.e., a Cartesian X-Y-Z definition of space, also referred to as the "global frame". The robot localizes itself within this global frame, which is also used for planning paths and navigation. To relate indoor position measurements (from an Aruco marker or a map) to outdoor RTK-GNSS measurements, the mathematical origin of the global frame requires known latitude and longitude values, so that any RTK-GNSS sensor measurement can be transformed into an (X, Y) position measurement in the global frame used by the robot (a sketch of such a conversion is shown after this list). The Coordinate System can be set by a user to define these values.
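As a small, purely illustrative sketch of this last concept, the Python snippet below converts a latitude/longitude measurement into an (X, Y) position relative to a global-frame origin using an equirectangular small-area approximation. The function name, the Earth-radius constant, and the choice of X pointing east and Y pointing north are assumptions made for this sketch; it does not describe the Autopilot's internal implementation.

import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, used for the approximation

def gnss_to_global_frame(lat_deg, lon_deg, origin_lat_deg, origin_lon_deg):
    # Equirectangular (small-area) approximation of the local-tangent-plane
    # conversion; X is taken to point east and Y to point north in this sketch.
    d_lat = math.radians(lat_deg - origin_lat_deg)
    d_lon = math.radians(lon_deg - origin_lon_deg)
    x = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat_deg))
    y = EARTH_RADIUS_M * d_lat
    return x, y

# A measurement a few meters north-east of the origin used in the examples on this page.
print(gnss_to_global_frame(51.45341, 5.44882, 51.4534, 5.4488))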

Preparing navigation data

The coordinate system is a key concept for the robot. It defines the latitude and longitude (and altitude, although the altitude is not used for Avular's ground robots) of the origin of the Cartesian space in which the robot navigates. All information that is relevant to the robot for navigation is at some point characterized by a pose.

A pose is a list of values combining a linear position and a quaternion orientation. It is used to describe the position and orientation of some piece of information, i.e., a data-concept, that is relevant to the robot, such as:

  • The origin of a map
  • A polygon (list of poses)
  • A path (list of poses)
  • An artifact (Aruco or waypoint)

When a pose is created, i.e., saved into the database, it is linked to the coordinate system that is being used by the robot at the moment of saving. This allows the cognition functionality to convert such a previously stored pose from its former coordinate system into a pose that matches the coordinate system currently being used by the robot, even when the values of the coordinate system have changed. Therefore, when the user updates the coordinate system, they do not need to update all poses related to maps, polygons, paths, Aruco markers and waypoints that have already been saved by the robot.

pose = [x, y, z, qx, qy, qz, qw]
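The snippet below is a minimal Python sketch of the idea described above: a pose is saved together with the coordinate system that was active at that moment, and can later be shifted into a newly defined coordinate system. The data layout, the small-area conversion, and the assumption that X points east and Y points north are simplifications made for this sketch; this is not the Autopilot's actual implementation.

import math

# Illustrative sketch only: a pose is stored together with the coordinate
# system (global-frame origin) that was active when it was saved.
stored_waypoint = {
    "pose": [2.0, 1.0, 0.0, 0.0, 0.0, 0.0, 1.0],  # [x, y, z, qx, qy, qz, qw]
    "coordinate_system": {"latitude": 51.4534, "longitude": 5.4488, "altitude": 61.14},
}

def reexpress(record, new_cs, earth_radius_m=6_371_000):
    # Shift a stored pose into a newly defined coordinate system by translating
    # it over the offset between the old and new origins (small-area
    # approximation; the orientation of the global frame is assumed unchanged,
    # as this page also requires for maps).
    old_cs = record["coordinate_system"]
    d_lat = math.radians(old_cs["latitude"] - new_cs["latitude"])
    d_lon = math.radians(old_cs["longitude"] - new_cs["longitude"])
    dx = earth_radius_m * d_lon * math.cos(math.radians(new_cs["latitude"]))
    dy = earth_radius_m * d_lat
    x, y, z, qx, qy, qz, qw = record["pose"]
    return {"pose": [x + dx, y + dy, z, qx, qy, qz, qw], "coordinate_system": new_cs}

new_cs = {"latitude": 51.4540, "longitude": 5.4480, "altitude": 61.14}
print(reexpress(stored_waypoint, new_cs))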

Maps and paths are data-concepts of the cognition functionality used for localization and path planning.

Maps are represented as an occupancy grid, see the illustration below, in which a grid-cell is either occupied, free, or unknown. The origin of the map has some pose with respect to the coordinate system, i.e., the origin of the global frame (which in the illustration below is at x = 10.0 and y = 3.2 meters). For the Autopilot Inference it should always hold that the orientation of the map (X-Y axes) is the same as the orientation of the global frame (X-Y axes).

Paths are represented as a list of poses without any time indication as to when the robot needs to be at each pose. The poses of a path are further defined with respect to some coordinate system.

[Illustration: ExampleMap]

Example: a path in 2D from the origin (0, 0) towards (3, 0) becomes

path = {
   poses: [
     [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
     [2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
     [3.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0] ],
   coordinate_system: {
     latitude: 51.4534,
     longitude: 5.4488,
     altitude: 61.14} }
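Relating grid cells of a map to the global frame only requires the pose of the map origin, since the map axes are required to have the same orientation as the global frame. The sketch below illustrates this for the example map above; the 0.1 m grid resolution and the helper name are assumptions made for this sketch and do not describe the Autopilot's internal map format.

# Illustrative sketch: relate an occupancy-grid cell to the global frame.
# The map origin pose matches the example illustration; the resolution is assumed.
map_origin = [10.0, 3.2, 0.0, 0.0, 0.0, 0.0, 1.0]  # map aligned with the global frame
resolution = 0.1                                    # meters per grid cell (assumed)

def cell_to_global(col, row):
    # Return the global-frame (x, y) of the centre of grid cell (col, row);
    # valid because the map orientation equals the global-frame orientation.
    x = map_origin[0] + (col + 0.5) * resolution
    y = map_origin[1] + (row + 0.5) * resolution
    return x, y

print(cell_to_global(0, 0))    # cell next to the map origin
print(cell_to_global(25, 10))  # 2.5 m and 1.0 m away from the map origin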

Artifacts are real-world objects that the cognition functionality knows about. They are represented by their pose in the real-world, and optionally, additional information such as a marker-id in the case of Aruco markers. At this moment, the cognition functionality only supports Aruco markers and Waypoints. The pose of an artifact is defined within some coordinate system. For convenience, we have already included Aruco markers 0, 1 and 2 in the cognition functionality upon delivery, as these markers have also physically been delivered with the robot. The pose of each marker is as follows:

  • Aruco marker 0 has position x = 0.0, y = 0.0, z = 0.3,
  • Aruco marker 1 has position x = 1.0, y = 0.0, z = 0.3,
  • Aruco marker 2 has position x = -1.0, y = 0.0, z = 0.3.
[Illustration: ExampleAruco and ExampleMapPosition]

Example: an Aruco marker and a waypoint are stored as

aruco = {
   pose: [0.0, 0.0, 0.0, 0.7071, 0.0, 0.0, 0.7071],
   coordinate_system: {
     latitude: 51.4534,
     longitude: 5.4488,
     altitude: 61.14} }

waypoint = {
   pose: [10.0, 3.0, 0.0, 0.0, 0.0, 0.0, 1.0],
   coordinate_system: {
     latitude: 51.4534,
     longitude: 5.4488,
     altitude: 61.14} }

Info

Note that the Aruco marker has a non-zero orientation. Specifically, it is rotated by 90 degrees around the X-axis to align with the image processing convention, where the Z-axis points out of the marker. This is because the global frame is defined with the X-axis pointing to the right and the Z-axis pointing upwards. Hence, the quaternion values of the Aruco marker in the global frame are qx = 0.7071, qy = 0.0, qz = 0.0, and qw = 0.7071.
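As a quick check of these values: a rotation of 90 degrees around the X-axis corresponds to the quaternion [sin(45°), 0, 0, cos(45°)], which can be verified with a few lines of Python (using the same [qx, qy, qz, qw] order as the pose definition above):

import math

angle = math.radians(90.0)   # rotation of the marker around the X-axis
qx = math.sin(angle / 2.0)   # 0.7071...
qy = 0.0
qz = 0.0
qw = math.cos(angle / 2.0)   # 0.7071...
print([round(q, 4) for q in (qx, qy, qz, qw)])  # [0.7071, 0.0, 0.0, 0.7071]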

Registration of new markers is supported by the Autopilot Inference; the interested reader is referred to the code examples. To register new markers, or to update the position of existing markers, you will need to define the global frame. This is a virtual point in the real world that you can easily recognize. Note that, upon delivery, the global frame is defined as the position of Aruco marker 0. When you want to use multiple Aruco markers in the same operational environment of the robot, you will need to update the positions of these markers as well. To do so, you must measure their position along the X and Y axes of the global frame and use the code examples to register that (new) position of the Aruco marker. Note that, as illustrated in the image below, you may also adjust the position of Aruco marker 0, which implies that some other real-world position becomes the global origin.

Info

When setting the global origin, choose a location that is easily recognizable and will remain consistent over time. This could be a specific point on a map, a physical landmark, or any other fixed reference point. Choosing a location that will not change over time ensures that the robot's navigation system remains accurate and reliable. Additionally, make sure to document the location of the global origin, as this will be important for future reference and troubleshooting.

[Illustration: ArucoPositions]

In case your robot will drive both indoors and outdoors, i.e., on a map and in RTK-GNSS supported areas, the origin of the global frame should be known to the GNSS system of the robot, i.e., the coordinate system should have the proper latitude and longitude. The code examples show how you may update these values of the origin of the global frame. After doing so, the Aruco markers of the above illustration will be saved to the knowledge base of the robot as follows (a sketch that assembles these records follows the list):

  • Aruco marker 0:
      aruco = {
        pose = [-3.0, 1.0, 0.0, 0.7071, 0.0, 0.0, 0.7071];
        coordinate_system = {latitude: 51.4534, longitude: 5.4488, altitude: 61.14} }
  • Aruco marker 1:
      aruco = {
        pose = [-6.0, -2.0, 0.0, 0.7071, 0.0, 0.0, 0.7071];
        coordinate_system = {latitude: 51.4534, longitude: 5.4488, altitude: 61.14} }
  • Aruco marker 2:
      aruco = {
        pose = [4.0, -2.0, 0.0, 0.7071, 0.0, 0.0, 0.7071];
        coordinate_system = {latitude: 51.4534, longitude: 5.4488, altitude: 61.14} }
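As a minimal illustration, the snippet below assembles these three marker records as plain Python data structures. How such records are actually registered on the robot (the service and its message type) is covered by the code examples and is not shown here.

# Illustrative only: the three Aruco marker records of the example above as
# plain data structures; the registration service itself is not shown here.
coordinate_system = {"latitude": 51.4534, "longitude": 5.4488, "altitude": 61.14}

# 90-degree rotation around the X-axis, as explained in the Info box above.
marker_orientation = [0.7071, 0.0, 0.0, 0.7071]

aruco_markers = {
    0: {"pose": [-3.0, 1.0, 0.0, *marker_orientation], "coordinate_system": coordinate_system},
    1: {"pose": [-6.0, -2.0, 0.0, *marker_orientation], "coordinate_system": coordinate_system},
    2: {"pose": [4.0, -2.0, 0.0, *marker_orientation], "coordinate_system": coordinate_system},
}

for marker_id, record in aruco_markers.items():
    print(marker_id, record["pose"])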

Once all information is in place for the robot to properly navigate, you may continue with defining a job.

Preparing a job

A user may define a new job using Avular's GUI called Cerebra Studio, or directly create a list of behaviors for immediate execution via Avular's ROS2 API. For now, our explanation assumes Cerebra Studio. A job is a sequence of tasks with constraints and policies on how to execute each specific task. The tasks that are currently supported for a job are GotoWaypoint, FollowPath, CoverArea, Dock, Undock and Wait.

The Autopilot uses Hierarchical Task Planning to define tasks. In this context, tasks are categorized as either compound tasks or primitive tasks. A compound task can be broken down into one or more primitive tasks, and a primitive task can be directly executed by the robot using one of its available behaviors. A job is a predefined list of tasks that should be executed in a specific order, while a compound task can be decomposed into primitive tasks based on the current situation. This means that the same compound task may result in different primitive tasks being planned for execution depending on the situation. Similarly, the same primitive task may have multiple alternative behaviors that can be executed, and the system will decide which one is best depending on the situation. However, this situation-dependent task decomposition has not yet been implemented. Some examples of compound and primitive tasks are listed below. Note that the Autopilot has some behaviors implemented that are not yet defined as a task in Cerebra Studio.

Compound task → Primitive task(s):

  • WarnPeople(goal: artifact) → FollowPath(goal: artifact) + Speak(sentence="please move away from the ")

Primitive task → Behavior:

  • GotoWaypoint(goal: artifact, constraint: path) → MoveAlongPath(goal: empty, constraint: path), if a path exists; MoveThroughPoses(goal: artifact-pose, constraint: empty), otherwise
  • FollowPath(constraint: path) → MoveAlongPath(goal: empty, constraint: path)
  • CoverArea(goal: coverage-area, constraint: go-area, nogo-area) → CoverArea(coverage-area: some-polygon, go-area: some-polygon, nogo-area: some-polygon)
  • Dock(policy: dock_marker_id, object_distance) → MoveInDock(policy: dock_marker_id, object_distance)
  • Undock() → MoveOutDock()
  • Speak(policy: sentence) → Speak(policy: sentence)
  • Wait(constraint: time) → Wait(constraint: time)

The example below illustrates how a compound task (blue) is broken down into its primitive tasks (blue), and how each of these primitive tasks is then associated with a specific behavior (green). In this example, the compound task is to warn people at the main-square, which requires the robot to follow a specific path, i.e., the NorthRoute. This compound task is first decomposed into a path-constrained FollowPath task (blue), which is then further broken down into the MoveAlongPath behavior (green). The compound task is also decomposed into a Speak task (blue), which is then associated with the Speak behavior (green).
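To make the decomposition tangible, the following Python sketch mimics it for this WarnPeople example. The task and behavior names follow the tables above, but the data structures, the argument values (including the spoken sentence), and the two functions are simplified illustrations, not the cognition functionality's actual planner.

# Simplified illustration of hierarchical task decomposition; not the actual
# planner of the cognition functionality.

def decompose(task):
    # Break a compound task down into a list of primitive tasks.
    if task["name"] == "WarnPeople":
        return [
            {"name": "FollowPath", "constraint": {"path": task["path"]}},
            {"name": "Speak", "policy": {"sentence": "please move away"}},  # illustrative sentence
        ]
    return [task]  # already a primitive task

def select_behavior(primitive):
    # Map a primitive task onto a behavior, depending on the situation.
    if primitive["name"] == "GotoWaypoint":
        # A path-constrained goto uses MoveAlongPath, otherwise MoveThroughPoses.
        if primitive.get("constraint", {}).get("path"):
            return {"behavior": "MoveAlongPath", "constraint": primitive["constraint"]}
        return {"behavior": "MoveThroughPoses", "goal": primitive["goal"]}
    if primitive["name"] == "FollowPath":
        return {"behavior": "MoveAlongPath", "constraint": primitive["constraint"]}
    if primitive["name"] == "Speak":
        return {"behavior": "Speak", "policy": primitive["policy"]}
    raise ValueError(f"no behavior known for {primitive['name']}")

compound = {"name": "WarnPeople", "path": "NorthRoute"}
behaviors = [select_behavior(p) for p in decompose(compound)]
print(behaviors)  # MoveAlongPath (constrained to NorthRoute) followed by Speak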

[Illustration: TaskDecomposition]

Initialization

When the robot is turned on, the Autopilot Inference is launched and starts its initialization phase. During this phase, the ordination functionality requests all behaviors that are supported by the other stacks, i.e., the behaviors they are able to execute. The behaviors are stored in a so-called behavior-tree, which is defined in an XML structure. A user-friendly interface called Groot is available to design the behavior-tree visually as a tree structure and then convert it into the XML structure. Each high-level functionality in the Autopilot Inference is responsible for managing which behaviors it is able to execute. For example, the navigation functionality maintains its behavior-tree XML for the execution of the MoveInDock behavior.
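As an impression of such an XML structure, the listing below is a generic BehaviorTree.CPP-style sketch (the format that Groot is built around). The tree layout and node names are illustrative assumptions, not the Autopilot's actual behavior-tree for MoveInDock.

<!-- Illustrative sketch only: a generic behavior-tree XML structure in the
     style of BehaviorTree.CPP; the node names are assumptions. -->
<root main_tree_to_execute="MoveInDock">
  <BehaviorTree ID="MoveInDock">
    <Sequence>
      <DetectDockMarker marker_id="{dock_marker_id}"/>
      <ComputeDockingPose marker_id="{dock_marker_id}" docking_pose="{docking_pose}"/>
      <MoveToPose goal="{docking_pose}"/>
    </Sequence>
  </BehaviorTree>
</root>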

[Illustration: SequenceBehaviorTree]

Info

Note that the ordination functionality is responsible for loading the behaviors. However, the mapping from tasks to behaviors is the responsibility of the cognition functionality. The cognition functionality has a predefined mapping, which assumes that all predefined behaviors known to the cognition functionality are also available in the ordination functionality.

Executing the operation

The cognition functionality and the ordination functionality together ensure that the robot executes the job defined by the user. This execution is triggered by calling a service of the cognition functionality to start executing the job, with the unique identifier of the job embedded in the call (/autopilot/information_manager/start_job). Internally, the cognition functionality will retrieve the list of tasks defined for that job from the knowledge base of the robot. The cognition functionality will then further decompose this list of tasks into a list of behaviors that are to be executed sequentially.

The ordination functionality continuously polls the next behavior of this behavior list and, unless the list is empty, executes it. The execution returns either success or failure, upon which the behavior is removed from the list and the next behavior is polled for execution. Specific details on what the execution of a behavior entails are presented in the description of the ordination functionality.

Info

If the list of behaviors is empty, the ordination functionality polls an empty list. In this case, it publishes a sentence on the topic for informing the user that the robot is "waiting for next behaviors". This ensures that the user is notified that there are no new behaviors to be executed and that the robot is waiting for further instructions.
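The listing below is a schematic Python sketch of the polling loop described above, including the empty-list notification. The function names and data structures are illustrative simplifications, not the ordination functionality's actual code.

import time

# Schematic sketch of the polling loop described above; illustrative only.
behavior_list = [
    {"behavior": "MoveAlongPath", "constraint": {"path": "NorthRoute"}},
    {"behavior": "Speak", "policy": {"sentence": "please move away"}},  # illustrative sentence
]

def execute(behavior):
    # Placeholder for handing the behavior to the responsible functionality;
    # returns True on success and False on failure.
    print("executing", behavior["behavior"])
    return True

for _ in range(4):  # a few polling iterations, for the sake of the example
    if not behavior_list:
        # Empty list: notify the user and keep polling for new behaviors.
        print("waiting for next behaviors")
        time.sleep(1.0)
        continue
    execute(behavior_list[0])
    # Whether the execution succeeded or failed, the behavior is removed from
    # the list and the next behavior is polled.
    behavior_list.pop(0)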

Info

When using the ROS2 API you may also inject behaviors directly into the list of behaviors using a specific service call. More details on how to inject behaviors once you are connected to the ROS2 network of the robot can be found in the code examples.

[Illustration: BehaviorExecution]