Indoor operations (RTK-GNSS denied areas)
Corresponding components: Autopilot Inference, Cerebra Studio (Mission Planner)
Info
Only applicable when your Origin One was acquired or updated with the autonomous capabilities of Avular's Autopilot Inference and has a 3D LiDAR sensor mounted on top.
Avular has developed a software stack, called Autopilot Inference, which provides autonomous capabilities to the Origin One (mainly for navigation and tasking). These capabilities allow a user to command a Goto waypoint task, a Follow path task, a Wait task and a Cover area task using our mission planner, which is part of Cerebra Studio and will be discussed here.
Warning
Make sure that the robot is unplugged from its charger
Info
Make sure that Cerebra Studio is installed on your computer (see installation for more information)
Job preparation
After the robot has completely finished its start-up procedure, turn on the remote controller, drive the robot in front of the marker, and ensure that the computer on which you want to open Cerebra Studio is able to access the robot (either by connecting to the WiFi access point of the robot, or by connecting to the same network as the robot). Before you are able to plan a mission, or as we call it a "job", you should first complete some preparations.
Preparation |
|
---|---|
Ensure that the robot will load the correct localization map. In case the localization map is already linked to the Aruco marker (see create a map), the map is automatically loaded once the robot is positioned right in front of this marker (between 0.5 and 1 meter away). In case no localization map was linked to the Aruco marker, or you want the robot to use a different localization map than the one linked, you may load a specific map to the robot using a ROS2 service call. See this page for more information on how to set up a ROS2 connection with the robot. | |
Open Cerebra Studio and use the drop-down menu in the top yellow bar to connect to your robot. Once connected, open the Mission Planner application in Cerebra Studio. | |
Cerebra Studio will ask you to open a "location", which in this case involves a new indoor location (unless you have already created a location in Cerebra Studio for the localization map that the robot will be loading). Therefore, connect to the robot, select "add location", and toggle "Local Map" in the pop-up menu of Cerebra Studio, which will open a list of localization maps. Then select the map that is marked as 'active' (meaning that it is the localization map that the robot is currently using). After a few moments this location is added to Cerebra's list of locations and you may select it. | |
Next, Cerebra Studio will ask you to select a "job" for the robot. You may select an existing job, but if there are no jobs, or you want to create a new one, you may add a new job by clicking the "create new job" button. You only need to provide a name for your job, after which it will be added to Cerebra's list of jobs, from which you may then select it. Once selected, Cerebra Studio will visualize the localization map of your operational environment along with the current position of the robot in that map. | |
During the execution of a job, which will be discussed later, you may want to keep track of other information that you can find in the dashboard of Cerebra Studio's Mission Planner. For one, in the top-left corner of Cerebra Studio, you will notice four green, round buttons. The first button opens the library, in which you can find all jobs, paths, polygons and waypoints known to the mission planner, while the two adjacent buttons save them either on your computer or on the robot. The last button of this group stops the job that is currently being executed (after the current task within that job is finished). Finally, in the top-right corner, there are two round indicators showing the current status of the robot. The first of these two is the execution status, indicating, for example, that the robot is "waiting for new behaviors", "moving", or "switching to map supported navigation". The second shows how accurately the robot is able to estimate its position, which needs to indicate "high robot pose accuracy" for any autonomous operations. More details on these status indicators are presented in the later section on Job execution. |
This completes the preparation, and you may now continue to define your first job.
Job definition
Creating a new job |
|
---|---|
During the preparation you will already have selected a job, or created a new one. Here, we will assume you have created a new job. A job is an ordered list of tasks. In this section we will show you how you can add tasks to a job. To the right of this text you see a section of the top-right of Cerebra Studio that is of importance for defining this list of tasks. It shows the two status indicators, in this case that the pose is inside a map (first indicator) and that the pose is accurate (second indicator). Beneath these indicators there is the name of the job that you defined, which for this manual is called "job_for_manual". Just to the right of this name you see a "play" button-icon, with which you may start the execution of the job by the robot, and a trash-bin icon-button, with which you can remove the job (for example to select another one). Underneath this field you see the button "Add Task", which you may use to add a new task. When you have added a new task, which will be explained in the section below, you will see the task being added to this ordered list. When executing the job, each task of this list is executed one by one, from top to bottom. |
Goto waypoint
Goto waypoint is a task in which the robot plans its own path across the localization map to the set waypoint, and then navigates along that path while avoiding obstacles.
Please note that, if there is no map, if the starting point of the robot or the destination waypoint is not on the map, or if no free path can be found on the localization map, then the robot is not able to plan a path and will therefore abort the task.
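To build intuition for when planning succeeds or fails, here is a minimal sketch, in plain Python, of planning on a small occupancy grid. It is purely illustrative and not part of Autopilot Inference; the `plan_path` helper and grid representation are assumptions for the example. Planning returns nothing when the start or goal is occupied, off the map, or unreachable, which mirrors the conditions under which the robot aborts a Goto waypoint task.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = occupied).

    Returns a list of cells from start to goal, or None when no free
    path exists -- analogous to the cases in which the task is aborted.
    """
    rows, cols = len(grid), len(grid[0])

    def free(cell):
        r, c = cell
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0

    if not (free(start) and free(goal)):
        return None  # start or goal occupied / off the map -> abort

    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Reconstruct the path by walking back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if free(nxt) and nxt not in came_from:
                came_from[nxt] = cell
                queue.append(nxt)
    return None  # no free path found -> abort

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (0, 2)))  # detours around the wall
```

The real planner works on a continuous map with robot footprint and costs, but the abort conditions follow the same logic.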
To define a Goto waypoint task, you first click the button "Add Task" located underneath the job-field where the name of your job is displayed. This will open a pop-up menu, as is illustrated in the table below on the left side. Within this menu, go to the field "Type" and select "Goto waypoint" from the drop-down list. The input fields of the pop-up menu will now change so that the other fields match the task that was selected. Continue to the next field of this pop-up menu, which is the "Waypoint" itself. You may select a waypoint that you have already saved from previous operations using the drop-down list, or you may create a new waypoint by clicking on the button "New" located to the right of the drop-down menu. For now, you only need to give the new waypoint a name, as selecting the location of the waypoint on the map comes later. Next, you may specify other policy parameters, such as the maximum speed in [m/s] at which the task should be executed, which has a limit of 1.5, and the distance that the robot is allowed to steer away from the path it had planned, for example to avoid obstacles. If you don't specify their values, then the default ones will be used.
Now you have covered all input fields of the Goto waypoint task, and thus you should click the button "Add" on the bottom left of the pop-up menu. The pop-up menu will close and you will see your newly added Goto waypoint task in the task-list of the job (top-right corner of the map). As a final step you should select the location of the waypoint itself. Simply click on the map with your mouse and the waypoint will be dropped at that location (see the illustration on the right below). You may change the location of the waypoint by dragging and dropping it with your mouse. Also, you can define the orientation of the robot at the waypoint by using the orange dot attached to the waypoint.
Definition of policy parameters |
Selecting the waypoint on the map |
---|---|
If you then execute the task, by clicking on the "play" button directly to the right of the job's name, the robot will execute this Goto waypoint task, and your Cerebra Studio environment will display something similar to the video below. More on Job execution and what you may expect from it is presented later on this page.
This task is useful when you have a localization map of the environment and don't care too much about how the robot actually navigates from its starting position to its destination.
This task is not (always) useful when the destination is close to the starting position, i.e., less than 2-3 meters, as the robot's many path options may result in it "dancing" to its destination. In that case it is better to define a path and command the next task, "Follow path".
Follow path
Follow path is a task in which the robot will follow a specific path set across, or perhaps off, the localization map, navigating along that path while avoiding obstacles. The robot will try to track this path to the best of its capabilities (localization accuracy, obstacle avoidance), which typically results in tracking the path up to 25 cm when clear of obstacles. Note that if a path goes off the localization map, the robot is limited to odometry and Aruco markers for keeping track of its position, where odometry stays accurate for about 10-20 meters.
Please note that:
- The robot should be within 2-3 meters from the starting position of a path, otherwise the robot is too far from the path and will abort the task.
- The robot will search for the furthest point on the path that is within these 2-3 meters, so if the robot is located somewhere along that path already, it will follow the path from its current position onwards.
- The path may contain loops, although the path-length between the crossing-points cannot be smaller than 4 meters.
- In case the path is a loop, the robot should be closer to the path's starting point than to the path's final point, as otherwise the robot will decide that it is already at the final point of the path and report success without actually driving the path.
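The start-point selection rule from the notes above can be sketched as follows. This is an illustrative approximation, not Avular's implementation; the `start_index` helper and the 3-meter threshold are assumptions for the example.

```python
import math

def start_index(path, robot, max_dist=3.0):
    """Pick the index of the furthest path point within max_dist of the
    robot, mimicking the rule that the robot joins the path at the
    furthest nearby point. Returns None when every point is further
    away than max_dist (the robot would abort the task).
    """
    best = None
    for i, (x, y) in enumerate(path):
        if math.hypot(x - robot[0], y - robot[1]) <= max_dist:
            best = i  # keep the last (furthest along) point in range
    return best

path = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (6.0, 0.0)]
print(start_index(path, (2.5, 0.5)))   # joins partway along the path
print(start_index(path, (0.0, 9.0)))   # too far from the path -> None
```

This also explains the loop caveat above: if a path's end point lies close to the robot, the "furthest point in range" may already be the final point, so the path would be reported as done.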
To define a Follow path task, you first click the button "Add Task" located underneath the job-field where the name of your job is displayed. This will open a pop-up menu, as is illustrated in the table below on the left side. Within this menu, go to the field "Type" and select "Follow path" from the drop-down list. The input fields of the pop-up menu will now change so that the other fields match the task that was selected. Continue to the next field of this pop-up menu, which is the "Path" itself. You may select a path that you have already saved from previous operations using the drop-down list, or you may create a new path by clicking on the button "New" located to the right of the drop-down menu. For now, you only need to give the new path a name, as selecting the points that define the path on the map comes later. Next, you may specify other policy parameters, such as the option to toggle reverse, in which case the robot will start driving from the path's destination and finish at the starting point of the path; the maximum speed in [m/s] at which the task should be executed, which has a limit of 1.5; and the distance in [meters] that the robot is allowed to slightly steer away from the path it had planned, for example to avoid obstacles (the robot will always try to stay within 1 meter from the path, unless you specify a smaller value). If you don't specify their values, then the default ones will be used.
Now you have covered all input fields of the Follow path task, and thus you should click the button "Add" on the bottom left of the pop-up menu. The pop-up menu will close and you will see your newly added Follow path task in the task-list of the job (top-right corner of the map). As a final step you should select the positions of the points of the path itself. Simply click on the map with your mouse and a point on the path will be dropped at that location (see the illustration on the right below). You may change the location of points by dragging and dropping them with your mouse. The path will appear as you click point after point, until you are happy with the path. The starting point of the path is indicated with a green point, while its destination is in red.
Definition of policy parameters |
Selecting the path on the map |
---|---|
If you then execute the task, by clicking on the "play" button directly to the right of the job's name, the robot will execute this Follow path task, and your Cerebra Studio environment will display something similar to the video below. Note that if you have selected the reverse option, your robot should be close to the path's destination. More on Job execution and what you may expect from it is presented later on this page.
This task is useful when you either want the robot to specifically follow a predefined path, or when you want the robot to drive to some location that is off the localization map, either permanently or for driving back onto the map afterwards.
This task is not (always) useful when there are many medium- and large-sized obstacles in the operation area, since the robot will not be allowed to avoid those ad-hoc obstacles and will therefore abort such tasks.
Wait
Wait is a task in which the robot will wait for a set amount of seconds, standing idle at the position where this task was started.
To define a Wait task, you first click the button "Add Task" located underneath the job-field where the name of your job is displayed. This will open a pop-up menu, as is illustrated below. Within this menu, go to the field "Type" and select "Wait" from the drop-down list. The input fields of the pop-up menu will now change so that the other fields match the task that was selected. The only input field of a Wait task is the time in seconds that the robot will wait. You may then click the button "Add" on the bottom left of the pop-up menu. The pop-up menu will close and you will see your newly added Wait task in the task-list of the job (top-right corner of the map).
This task is useful when driving indoors and you are demonstrating that the robot is able to go to places, as it will wait for a while and not immediately continue to the next task. It may also be useful when you are developing your own application on top of the robot and you need the robot to wait for a specific amount of time, for example to take a snapshot. Of course, not setting a new task will also result in a robot that is idle at the position where the last task of a job was finished (or aborted).
Cover area
Cover area is a task in which the robot will plan a coverage-path across an area, so that the area will be completely covered by the robot while stopping in front of ad-hoc obstacles. Although the robot may be able to avoid small obstacles, if an ad-hoc obstacle is not removed the robot will abort the task and continue to the next task.
Please note that:
- Before starting the Cover area task the robot should be positioned within the perimeter itself, otherwise it will not be able to plan a path. You can use a Goto waypoint or Follow path task to get the robot within the perimeter before starting the Cover area task.
- As the robot will make turns on the contour of the perimeter, and may also run outer contours around the perimeter, the Cover area task requires a minimum of 1 meter of free space outside the perimeter. Otherwise the robot may get stuck and abort the task. So be careful when adding this task in confined spaces.
- The robot plans a path to cover the perimeter, and then it will search for the furthest point on that path that is within 2-3 meters to start the execution. This means that, if your perimeter is very small, such as 1x1 meter, the robot may already be closest to a point on the path that is near the end of the path. It will then start at this point on the path, and it may seem that the robot skips a large portion of the area.
To define a Cover area task, you first click the button "Add Task" located underneath the job-field where the name of your job is displayed. This will open a pop-up menu, as is illustrated in the table below on the left side. Within this menu, go to the field "Type" and select "Cover area" from the drop-down list. The input fields of the pop-up menu will now change so that the other fields match the task that was selected. Continue to the next field of this pop-up menu, which is the "Perimeter" that defines the polygon of the area the robot should cover. You may select a perimeter (or polygon) that you have already saved from previous operations using the drop-down list, or you may create a new one by clicking on the button "New" located to the right of the drop-down menu. For now, you only need to give the new perimeter a name, as selecting the points for the perimeter on the map comes later. Next, you may specify other policy parameters via the other input fields of the pop-up menu, which we will address one by one. To start, there is the option to define the controller of the task (FollowPathLoosely or FollowPathStrictly), each with its own characteristics that are reflected in the name of the controller.
Next, there are the following input fields:
- the distance to the path in [meters] that the robot is allowed to steer away from what it had planned;
- the maximum speed at which the task should be executed (limited to 1.5 [m/s], although the robot will already drive very slowly with the "Strictly" controller);
- the Number of Contours that follow the edge of the area, either inside or outside the perimeter (by default, two outside contour paths are planned and no inner contours);
- the direction of the line pattern of the planned paths, in degrees with respect to the mathematical x-axis of the global frame (i.e., the horizontal lines in the map of Cerebra Studio);
- whether or not to use a linear infill approach, where linear infill implies a lawnmower pattern and no linear infill implies a spiral path aligning with the contour of the perimeter;
- whether or not to use the localization map, which for indoor operations is necessary to plan a path and avoid obstacles present on that map.
If you don't specify their values, then the default ones will be used.
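To illustrate what the linear-infill (lawnmower) option produces, here is a simplified sketch for an axis-aligned rectangular perimeter. The `lawnmower` helper is purely illustrative and not part of Cerebra Studio or Autopilot Inference; the real planner also handles arbitrary polygons, the pattern direction, and contour passes.

```python
def lawnmower(width, height, spacing):
    """Generate an alternating (boustrophedon) line pattern over an
    axis-aligned width x height rectangle, one pass per `spacing`
    meters -- a simplified stand-in for the linear-infill option.
    """
    waypoints = []
    y = 0.0
    left_to_right = True
    while y <= height:
        if left_to_right:
            waypoints += [(0.0, y), (width, y)]
        else:
            waypoints += [(width, y), (0.0, y)]
        left_to_right = not left_to_right
        y += spacing
    return waypoints

# A 4 x 2 m area swept with 1 m between passes.
for point in lawnmower(4.0, 2.0, 1.0):
    print(point)
```

The spacing between passes is what determines whether the area is fully covered by the robot's footprint or sensor swath.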
Now you have covered all input fields of the Cover area task, and thus you should click the button "Add" on the bottom left of the pop-up menu. The pop-up menu will close and you will see your newly added Cover area task in the task-list of the job (top-right corner of the map). As a final step you should select the positions of the points of the perimeter's contour, or polygon, itself. Simply click on the map with your mouse and a point of the perimeter will be dropped at that location (see the illustration on the right below). You may change the location of points by dragging and dropping them with your mouse. The perimeter will appear as you click point after point, until you are happy with the area of the perimeter. The starting point of the path is indicated with a green point, while its destination is in red.
Warning
Since the robot is operating indoors, we strongly advise to set the controller to "FollowPathStrictly", so that the robot does not get stuck when making U-turns, and to ensure that the localization map is being used, so that no paths are planned through occupied areas.
Definition of policy parameters |
Selecting the area on the map |
---|---|
If you then execute the task, by clicking on the "play" button directly to the right of the job's name, the robot will execute this Cover area task, and your Cerebra Studio environment will display something similar to the video below. More on Job execution and what you may expect from it is presented later on this page.
This task is useful when you want to sweep larger areas autonomously.
This task is not (always) useful when there are many ad-hoc obstacles in the operation area, since the robot may stop in front of these obstacles and will abort the task in case they are not removed within 10-15 seconds.
Info
Although the next section continues with the immediate execution of the job that was defined, you may also schedule a moment in time at which the job should be executed. For that, you will need to create a new schedule and link it to the job. You may find the tab for creating this schedule at the bottom of the Mission Planner in Cerebra Studio, where it states "Plan" on the left and "Execute" on the right. The "Plan" tab was used to define the job, so switching to the "Execute" tab will enable you to define a schedule stating which job is to be executed at what moment in time. Please continue reading Cerebra Studio (Mission Planner) for more information.
Job execution
Warning
It is sensible to keep the remote e-stop in your hand when executing a job, so that you may press it immediately when you need to.
The job is now stored in Cerebra Studio and we need to pass that information to the robot so that it knows what to execute. As was already mentioned in the section on job preparation, there is a small "play" button-icon right next to the name of your job in the top-right corner of the map. Clicking that button will send the job to the robot and trigger it for execution. In this section we will present some background information about the status indicators of Cerebra Studio and about specific behavior of the robot that is related to sharp corners, localization, or obstacle avoidance.
Status indicators |
|
---|---|
When the robot is booted up in front of a marker, then under normal circumstances its localization should be accurate and it is waiting for a user to specify the behaviors it needs to execute. These behaviors are specified as soon as you execute the job you have defined. | |
When the robot is booted up in front of a marker and it is waiting for behaviors, as mentioned above, and you click the "play" button-icon in Cerebra Studio, then before actually executing the job you will first receive a pop-up asking whether the job should indeed be uploaded to the robot. | |
When you have uploaded the job to the robot, as mentioned above, it might be the case that the robot will not start driving and the status indicators notify that the Autopilot Inference is waiting for control of the robot. This can happen when you have manually driven the robot and clicked for execution of the job, but you have not released control of the robot and it is still in MANUAL mode. This can be solved by clicking the "X" button on your remote controller, which will release command of the robot so that it can be acquired by the Autopilot Inference. | |
When the robot is executing a job and driving, it may notify the user that it has detected that it is driving inside a known map and that it is improving its position estimate using the LiDAR data and the map. Typically, the robot will then also indicate that it has a high pose accuracy. | |
When the robot is booted up while it is not in front of a marker, the Autopilot Inference will wait until an accurate location has been attained. The status indicators will then show that it is waiting for a proper position, as the robot has a low position accuracy. Simply driving the robot in front of a marker will solve the issue and continue the initialization process. | |
When the robot is in a status similar to the one mentioned above, and you have driven the robot in front of a marker, there is an in-between status before the robot is completely initialized. In this in-between status the status indicators of Cerebra Studio will mention that, on the one hand, the robot is waiting for a proper position, while on the other hand it has already obtained a high pose accuracy. | |
When you have connected the robot to Cerebra Studio, it might be the case that Cerebra Studio was not able to retrieve the status of the robot, which is indicated with the status "unknown". |
Sharp corners
The default controller of the robot is FollowPathLoosely. This controller is tuned to avoid small and medium-sized obstacles most easily. To do so it will plan a number of alternative paths ahead, some 2-3 meters ahead, and it will choose the one that seems most promising. For smooth paths this works great, but for paths with sharp corners, i.e., 45 degrees or less, or U-turns, this controller has difficulty. The robot will have difficulty selecting which alternative path to take, which for sharp corners results in a nervous motion behavior of the robot, while for U-turns the robot might even fail to make any progress at all (typically when two right or left turns of 90 degrees are specified on the path less than 1 meter apart).
Nervous at sharp corner |
Almost failing at U-turn |
---|---|
Localization
While executing the job the robot will keep track of its position, which is also shown on the localization map displayed in Cerebra Studio. To keep track of its position the robot exploits the following pieces of information.
- Odometry information, i.e., position relative to where the robot booted, which is obtained from IMU sensors and wheel encoders.
- Marker information, i.e., position with respect to the global frame, which is obtained by transforming the position of the robot with respect to the Aruco marker into a position with respect to the global frame given prior knowledge on the position of the Aruco marker in the global frame.
- Map information, i.e., position with respect to the global frame, which is obtained by matching the LiDAR scan of the robot to a pre-recorded localization map of the environment, and thereby estimating the position of the robot in that map whenever the robot is set to be within the boundary of the localization map (see creating a map). The position of the robot in the map is then turned into a position of the robot in the global frame, since the Aruco marker that was used when creating the localization map has a known position in the global frame and, hence, the origin of the map is also known in the global frame.
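The marker-based update in the list above amounts to composing two rigid-body transforms: the known pose of the marker in the global frame and the measured pose of the robot relative to the marker. A minimal 2D sketch, with illustrative poses (not the robot's actual frames or API):

```python
import math

def compose(pose_a, pose_b):
    """Compose 2D poses (x, y, theta): given the pose of frame B
    expressed in frame A, and the pose of frame A in the global frame,
    return the pose of B in the global frame."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    return (xa + xb * math.cos(ta) - yb * math.sin(ta),
            ya + xb * math.sin(ta) + yb * math.cos(ta),
            ta + tb)

# Known position of the Aruco marker in the global frame (illustrative).
marker_in_global = (5.0, 2.0, math.pi / 2)
# Robot pose as measured relative to the marker (illustrative).
robot_in_marker = (1.0, 0.0, 0.0)

robot_in_global = compose(marker_in_global, robot_in_marker)
print(robot_in_global)
```

The same composition explains why a wrongly registered marker position corrupts the robot's global position estimate, as discussed in the "Incorrect position" section below.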
Info
The robot is delivered with prior knowledge that the Aruco marker with ID = 0 is positioned at the mathematical origin of the global frame. So, by default, the location at which this Aruco marker 0 is located also defines the mathematical origin of the global frame that is used by the robot. If you wish to update the positions of the other Aruco markers that you are using so that they match a different position in the global frame, then please continue reading the background information of the Autopilot Inference.
Inaccurate position
It may happen that the robot's position estimate becomes too inaccurate, at which point it will pause, and possibly stop, its job to wait until the position is accurate again. This can be expected when the robot is not able to match the LiDAR scan to a localization map, for example when the map is very large, or when the robot drove off the map.
The best thing to do then is to manually drive the robot back in front of the initial Aruco marker and update the job to avoid the situation. Another option would be to add an Aruco marker to the knowledge base of the robot, i.e., specify its marker ID and position in the global frame, and change the job so that the robot will wait facing towards the Aruco marker and may thus detect it and use its known position to update its own position estimate.
Incorrect position
It may happen that the robot drives in a completely wrong direction when there is no reason for doing so, i.e., no obstacles blocking its path to its destination. Most likely this is caused by an incorrectly estimated position of the robot. Although this might be due to the matching of the LiDAR scan with the localization map, it is more likely that the robot encountered an Aruco marker of which it has a known position in its knowledge base, but that position does not comply with the location of the marker in the real world. To solve this issue, one should only use the initial Aruco marker that is linked to the localization map, and ensure that no other Aruco markers are visible to the robot in the operational area unless they have been deliberately placed in that area and added to the knowledge base of the robot (with a correct position in the global frame).
Obstacle avoidance
While executing the job the robot will avoid obstacles that are not present on its localization map but are encountered ad hoc. If and how the robot avoids obstacles depends on the task and policy that were set by the user. To this end, we distinguish between tasks that have a path constraint, such as Follow path and Cover area, and tasks that do not have a path constraint, such as Goto waypoint.
Warning
Since the robot uses the LiDAR for detecting obstacles, and a LiDAR is incapable of detecting glass, such as windows, you should not expect the Origin One to avoid glass walls and glass obstacles. So make sure that these cannot be accidentally damaged by your Origin One. If you want your robot to avoid glass walls, try masking the glass with window foil (matt or frosted) for the first 30 cm.
Navigating with path constraint
When the robot is constrained to a path, the robot automatically assumes that some distance to the path should be maintained. As a user you may set this distance, but even without setting a value the robot will try to stay within some 0.5 up to 1 meter from this path. Therefore, the robot will not be able to avoid large obstacles, but only small- and sometimes medium-sized obstacles.
avoiding small-sized obstacles |
avoiding medium-sized obstacles |
avoiding large-sized obstacles |
---|---|---|
Small-sized obstacles, i.e., up to 10-15 cm, are typically avoided by the robot while it keeps driving forwards. | Medium-sized obstacles, i.e., up to 50 cm, are typically avoided by slowing down in front of the obstacle, assessing the situation by probing a bit left and right of the obstacle, and then choosing to pass the obstacle either left or right. Whether the object is avoided will depend on how far the robot is allowed to drive away from the path constraint. | Large-sized obstacles are typically not avoided. The robot will slow down and assess the situation by probing left and right from the path. But after some 10 seconds, where the exact time depends on the situation and how hard the robot is trying, the robot will give up and abort the task to continue to the next. |
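You can sanity-check such a deviation corridor yourself with a point-to-segment distance computation. The following sketch is illustrative and not the robot's controller; the `cross_track_distance` helper and the 1-meter corridor value are assumptions based on the defaults described above.

```python
import math

def cross_track_distance(p, a, b):
    """Shortest distance from point p to the segment a-b, i.e. how far
    the robot has strayed from one leg of its path."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

MAX_DEVIATION = 1.0  # meters, the default corridor described above

robot = (2.0, 0.6)
leg = ((0.0, 0.0), (4.0, 0.0))
print(cross_track_distance(robot, *leg) <= MAX_DEVIATION)  # True
```

An avoidance maneuver around an obstacle is only feasible when the detour it requires stays within this corridor, which is why medium and large obstacles may cause the task to be aborted.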
Navigating without path constraint
When the robot is not constrained to a path, the robot will automatically plan a path by itself using a combination of the prior recorded localization map, or floor plan, of the environment and the obstacles that it encountered during the current and previous task (if that task was executed immediately before the current one). One may further set a distance to the path when defining this task, at which the robot will try to maintain this distance along the path it planned for itself. But note that, since no path constraint is provided, the robot may replan its path any time it decides to do so (as long as the path is on the map). Therefore, the robot is able to avoid small-, medium- and large-sized obstacles, although there is an upper limit of about 3 meters.
avoiding small-sized obstacles |
avoiding medium-sized obstacles |
avoiding large-sized obstacles |
---|---|---|
Small-sized obstacles, i.e., up to 10-15 cm, are typically avoided by the robot while it keeps driving forwards. | Medium-sized obstacles, i.e., up to 50 cm, are typically avoided by slowing down in front of the obstacle, assessing the situation by probing a bit left and right of the obstacle, and then choosing to pass the obstacle either left or right. | Large-sized obstacles, up to 3 meters, are typically avoided by slowing down and assessing the situation by probing left and right of the obstacle. But after some 10 seconds, where the exact time depends on the situation and how hard the robot is trying, the robot will replan an entirely new path. The new path may be planned badly, for example again going through the obstacle if it is really large, after which the same avoidance sequence is repeated by the robot. Typically, the robot will find a passage around the obstacle, but occasionally it might get stuck in a deadlock, and after 3 attempts it will abort the task and continue to the next. |
Info
In case the obstacle is present on the localization map, or floor plan, that is used by the robot for planning, then the robot will plan a path around the obstacle no matter how large the obstacle is.