Self-planned drive (not supported on the Origin One 1.0/1)

Corresponding components: Origin platform

Warning

Make sure that the robot is unplugged from its charger

Important

Make sure that the robot has a map of the environment and that it is available to the Autopilot (see autopilot-mapping for more information)

Important

Make sure that you have ROS2 Humble installed on your computer (see ros2 installation for more information)

The robot may also be commanded without Cerebra Studio. In that case you do not define a job as a list of tasks for the robot, but instead define a list of behaviors that the robot executes immediately. As an example, the "Follow path" task as specified by Cerebra Studio is decomposed by the Autopilot into a MoveAlongPath behavior with a particular path constraint. Other behaviors that are supported by the Autopilot are MoveThroughPoses (e.g., moving to a goal waypoint), Wait and CoverArea.

The Autopilot keeps track of the list of behaviors that it was asked to execute. When a behavior has been executed, it is removed from the list and the next behavior in the list is executed, and so on. From your local computer you can specify a list of behaviors as a ROS2 message and make a service call to the Autopilot to add that list to the list of behaviors that the robot may already be executing. The behaviors in this list are executed immediately, one after another.

(Figure: the Autopilot's behavior FIFO)
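To make these FIFO semantics concrete, the following sketch models in plain Python how the Autopilot consumes its behavior list. It is illustrative only and independent of the actual ROS2 interface: a service call appends behaviors behind whatever is already queued, and the executor runs them front to back.

    from collections import deque

    # Illustrative model of the Autopilot's behavior FIFO (not the real interface).
    class BehaviorFifo:
        def __init__(self):
            self._queue = deque()

        def add_behaviors(self, behaviors):
            """Models the service call: append behaviors behind those already pending."""
            self._queue.extend(behaviors)

        def run(self):
            """Execute the behaviors one after another; each is removed once done."""
            while self._queue:
                behavior = self._queue.popleft()
                print(f"executing {behavior} ...")

    fifo = BehaviorFifo()
    fifo.add_behaviors(["MoveThroughPoses", "Wait"])
    fifo.add_behaviors(["CoverArea"])  # appended behind the behaviors already queued
    fifo.run()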

Since Cerebra Studio is not required, we can immediately start the Autopilot on the robot. From earlier examples you should already know that this is done with the following steps.

  1. Open a terminal on your computer and start an ssh session with the robot, see first-time-connecting.
  2. Within the terminal, enter the docker container of the Autopilot as follows: $ avular compose enter autopilot.
  3. When indoors, ensure that the robot is close to and in front of the Aruco marker. When outdoors, ensure that the robot has a stable RTK-GNSS fix and drive the robot for 5 meters manually (preferably in a straight line).
  4. Start the Autopilot in DEFAULT mode by running the following command in the docker environment of the Autopilot: ros2 launch autopilot_origin_twilight default.launch.py. It may take two to three minutes until the Autopilot is successfully launched, at which point you will receive a logging statement in the terminal equal to: [behaviortree_executor-1] [INFO] [autopilot.behaviortree_executor.LogInfo]: System initialized.
  5. Open a new terminal and start an ssh session with the robot.
  6. Within the terminal, enter the docker container of the Autopilot as follows: $ avular compose enter autopilot.
  7. You may check whether the Aruco marker is also detected by the robot by assessing whether messages are being published on the topic "autopilot/marker_pose". This can be done by running the following command in the terminal's ssh session: $ ros2 topic echo /autopilot/marker_pose. If messages with the correct ID of the Aruco marker are being published, then the marker was detected and you may hit 'ctrl-c' to stop checking the messages.
  8. Load the desired map into the Autopilot by running the following command in the terminal's ssh session: ros2 service call /autopilot/information_manager/load_localization_map knowledge_base_msgs/srv/PublishLocalizationMap '{name: some_name}'. You should receive a response in the terminal that the map was published successfully. If you no longer know the name of the map, you can request all the maps known to the robot using the following command: ros2 service call /autopilot/information_manager/resource/map/list knowledge_base_msgs/srv/ListResource. A programmatic version of the map-loading call is sketched right after this list.
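For reference, the map-loading service call of step 8 can also be made from a small rclpy script instead of the ros2 command line. The sketch below uses the service name and the knowledge_base_msgs/srv/PublishLocalizationMap type given above and should be run where that package is available (e.g., inside the Autopilot container); the map name is a placeholder and the exact contents of the response may differ.

    import rclpy
    from rclpy.node import Node
    from knowledge_base_msgs.srv import PublishLocalizationMap


    def main():
        rclpy.init()
        node = Node("load_map_client")
        client = node.create_client(
            PublishLocalizationMap,
            "/autopilot/information_manager/load_localization_map",
        )
        if not client.wait_for_service(timeout_sec=5.0):
            node.get_logger().error("load_localization_map service not available")
            return
        request = PublishLocalizationMap.Request()
        request.name = "some_name"  # replace with the name of your map
        future = client.call_async(request)
        rclpy.spin_until_future_complete(node, future)
        node.get_logger().info(f"response: {future.result()}")
        node.destroy_node()
        rclpy.shutdown()


    if __name__ == "__main__":
        main()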

Debugging GUI

The robot is now fully prepared for autonomous driving, and you are ready to start a self-planned path. This differs from a path as planned in Cerebra Studio in the sense that you only provide the final waypoint, and the robot plans a path towards that waypoint using the floorplan. You can command this by specifying a MoveThroughPoses behavior to the robot, possibly followed by other behaviors. Here we will show you how to define such a list of behaviors using the debugging GUI that is available on the robot; the interested reader is referred to our code examples to understand the ROS2 commands behind this debugging GUI (a first sketch follows after the steps below).

  1. In the ssh session that was already started above, run the command $ ros2 launch debug_gui debug_gui.launch.py. This will start the debugging GUI on the embedded computer of the robot.
  2. To visualize the debugging GUI, open a browser and go to <hostname>.local:8081. In the center panel of this GUI you will see the camera image of the robot. Use the arrows in this center panel to navigate to the second view, which shows the floorplan.
  3. In the left panel of this GUI, toggle the 'move-to' behavior, after which you may click waypoints on the floorplan.
  4. When clicking the "add moving task" button you will see the list of "move-to" behaviors, one for each waypoint you clicked on the floorplan.
  5. Now click the "execute" button and the robot will start executing one behavior after another. Please do not forget to release the MANUAL control mode by clicking on the "X" button of the remote controller.

(Figure: defining move-to behaviors in the debugging GUI)
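As a starting point for reproducing what the debugging GUI does in your own code, the sketch below builds the list of waypoints for a MoveThroughPoses behavior from standard geometry_msgs types. How that list is then handed to the Autopilot (the behavior message and the service name) is robot-specific and documented in our code examples, so the sketch stops at constructing the poses; the frame name and the coordinates are example assumptions.

    from geometry_msgs.msg import PoseStamped


    def make_waypoint(x, y):
        pose = PoseStamped()
        pose.header.frame_id = "map"   # assumption: poses are expressed in the map/floorplan frame
        pose.pose.position.x = x
        pose.pose.position.y = y
        pose.pose.orientation.w = 1.0  # identity orientation; heading left to the planner
        return pose


    # One pose per waypoint clicked on the floorplan, in order of execution.
    waypoints = [make_waypoint(2.0, 0.5), make_waypoint(4.5, 1.0)]
    for wp in waypoints:
        print(wp.pose.position)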

Echoing ROS2 topics and Rviz

The easiest way to echo topics is to use the ssh session of the above terminal and run the command $ ros2 topic list, which shows all the topics that the robot supports. If you want to read out the position of the robot, run the command $ ros2 topic echo /autopilot/estimated_pose. This prints the current position of the robot as a Gaussian distribution, i.e., with a mean and a covariance. A drawback is that you read out these topics within an ssh session on the robot; they are not available on your local computer.
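If you want to process the pose in code rather than echoing it, a minimal rclpy subscriber looks as follows; like the echo command, it must run on the robot (in the ssh session) as long as no bridge to your local computer is set up. We assume here that the topic uses the common geometry_msgs/msg/PoseWithCovarianceStamped type (a mean pose plus a covariance matrix); verify the actual type with ros2 topic info /autopilot/estimated_pose.

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import PoseWithCovarianceStamped


    class PoseListener(Node):
        def __init__(self):
            super().__init__("pose_listener")
            self.create_subscription(
                PoseWithCovarianceStamped,   # assumed message type, see note above
                "/autopilot/estimated_pose",
                self.on_pose,
                10,
            )

        def on_pose(self, msg):
            p = msg.pose.pose.position
            self.get_logger().info(f"mean position: x={p.x:.2f}, y={p.y:.2f}")


    def main():
        rclpy.init()
        rclpy.spin(PoseListener())


    if __name__ == "__main__":
        main()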

To actually open up the ROS2 network of the robot on your local computer, and thereby visualize ROS2 topics in tools such as Rviz, or to create your own ROS2 nodes that interact with the robot, you will need to establish a Zenoh bridge between your computer and the robot. How to set up this interaction is explained in the code examples of the next section.

Code examples and simulation environment

More advanced interactions with the robot are possible, for example via the code examples that can be found at avular-github/example_origin_ros, or via the Gazebo simulation environment that can be found at avular-github/avular_origin_simulation.

Info

When the robot executes a MoveTo behavior for which it has not received a path that it is constrained to follow, the robot is authorized to plan any path. To do so, the robot takes the floorplan, i.e., the floorplan that is already used by the robot for localization, and extends it with actual information about occupied areas retrieved from the LiDAR data; on that combined map a feasible path (through free space) is planned. More information about how the robot plans a path can be found at autopilot-path-planning.
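Conceptually, the combination can be thought of as taking, per cell, the more pessimistic of the floorplan and the live LiDAR occupancy. The toy numpy sketch below illustrates this idea; it is not the Autopilot's actual implementation.

    import numpy as np

    # Toy illustration: 0 = free, 1 = occupied. Not the Autopilot's actual code.
    floorplan = np.array([[0, 0, 1],
                          [0, 0, 1],
                          [0, 0, 0]])
    lidar_occupancy = np.array([[0, 1, 0],
                                [0, 0, 0],
                                [0, 0, 0]])  # an ad-hoc obstacle seen by the LiDAR

    # A cell is blocked if either source marks it occupied; the planner then
    # searches for a path through the remaining free cells.
    combined = np.maximum(floorplan, lidar_occupancy)
    print(combined)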

Info

As a result, the robot is able to avoid not only small ad-hoc obstacles up to 15 cm, but also medium-sized ad-hoc obstacles up to 50 cm. For larger ad-hoc obstacles up to 3 meters, the robot will first try to avoid the obstacle directly, but after a number of failed attempts it will plan a completely new path around the obstacle (provided the robot was able to detect the boundaries of the obstacle in the meantime). More information about obstacle avoidance can be found at autopilot-obstacle-avoidance.