High-level design
The autonomous capabilities, and whether or not the robot is authorized to exploit them, are embedded in the design of the Autopilot Inference. The design defines a particular software configuration of high-level functional components. In line with many other reference architectures, such as observe-orient-decide-act (OODA) and MAPE-K, our Autopilot Inference has five main functionalities that interact with each other, with the user, and with the robot, as illustrated below.
- The user may interact with the Autopilot Inference either via our mission planner, called Cerebra Studio, or via ROS 2.
- The Autopilot Inference interacts with the physical robot through the so-called "Origin BringUp", a gateway for receiving sensor data, such as camera images and LiDAR point clouds, and for sending reference velocities; see the sketch after this list.
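To give an impression of the ROS 2 route, the sketch below shows a minimal `rclpy` node that listens to sensor data coming from the Origin BringUp and publishes reference velocities back to the robot. The topic names (`/camera/image_raw`, `/lidar/points`, `/cmd_vel`) are illustrative assumptions, not the documented interface of the Origin BringUp.

```python
# Minimal sketch of talking to the robot over ROS 2 with rclpy.
# Topic names are assumptions for illustration; the actual topics exposed
# by the Origin BringUp may differ.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image, PointCloud2
from geometry_msgs.msg import Twist


class BringUpBridge(Node):
    def __init__(self):
        super().__init__('bringup_bridge')
        # Receive sensor data forwarded by the Origin BringUp.
        self.create_subscription(Image, '/camera/image_raw', self.on_image, 10)
        self.create_subscription(PointCloud2, '/lidar/points', self.on_points, 10)
        # Send reference velocities back to the robot.
        self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

    def on_image(self, msg: Image):
        self.get_logger().info(f'received image {msg.width}x{msg.height}')

    def on_points(self, msg: PointCloud2):
        self.get_logger().info(f'received point cloud with {msg.width * msg.height} points')

    def send_reference_velocity(self, vx: float, wz: float):
        cmd = Twist()
        cmd.linear.x = vx    # forward velocity [m/s]
        cmd.angular.z = wz   # yaw rate [rad/s]
        self.cmd_pub.publish(cmd)


def main():
    rclpy.init()
    rclpy.spin(BringUpBridge())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```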
| High-level functionality | Theoretical meaning | Current examples |
|---|---|---|
| Cognition | The mental action or process of acquiring knowledge and understanding through thought and experiences | Managing data and information; decomposing jobs and tasks into behaviors |
| Ordination | The mental action or process of putting something in order | Behavior execution; software node management |
| Perception | The ability to see, hear, or become aware of something through the senses | Object detection; object tracking |
| Navigation | The process or activity of accurately ascertaining one's position and planning and following a route | Localization; path planning; path following |
| Interaction | Communication with someone or manipulation of something | Informing the user |
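As a small sketch of how these five functionalities could be referenced as a shared vocabulary in code, the enum below lists them with their current examples; the enum itself is illustrative and not part of the Autopilot Inference API.

```python
# Illustrative only: a shared vocabulary for the five high-level functionalities.
from enum import Enum


class Functionality(Enum):
    COGNITION = 'cognition'      # manage data/information, decompose tasks into behaviors
    ORDINATION = 'ordination'    # execute behaviors, manage software nodes
    PERCEPTION = 'perception'    # detect and track objects
    NAVIGATION = 'navigation'    # localize, plan and follow paths
    INTERACTION = 'interaction'  # inform the user
```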
We distinguish two phases in the user flow: deployment-time (when the robot is being prepared for deployment) and run-time (when the robot is running its operation).
A typical workflow starts at deployment-time, in which the user interacts with the Cognition functionality of the Autopilot. For example, the user provides a list of tasks the robot needs to execute, together with any policies or constraints the robot needs to take into account.
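As a concrete, hypothetical example of such deployment-time input, the sketch below defines a small task list with policies. The structure and field names are assumptions made for illustration and do not reflect the actual Cerebra Studio or ROS 2 interface.

```python
# Hypothetical deployment-time input for the Cognition functionality.
# Field names and task/policy types are illustrative assumptions only.
mission = {
    "tasks": [
        {"type": "move_to_waypoint", "waypoint": "dock_3"},
        {"type": "take_snapshot", "target": "pallet_area"},
        {"type": "move_to_waypoint", "waypoint": "charging_station"},
    ],
    "policies": [
        {"type": "max_speed", "value_m_s": 0.5},
        {"type": "keep_out_zone", "zone": "loading_bay"},
    ],
}

for task in mission["tasks"]:
    print(f'task: {task["type"]}')
```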
In the meantime, the robot has been turned on and is waiting to start its operation, either in front of an ArUco marker or outdoors. When in front of an ArUco marker, the Perception stack detects the marker so that the Navigation functionality can estimate the robot's initial position. The position is defined with respect to a global origin, which is a point in space determined either by a known map or by a known latitude-longitude. When outdoors, the robot's initial position is estimated by driving it manually for 5 meters while interpolating the RTK-GNSS measurements (or fixes).
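For the outdoor case, the idea is that the displacement accumulated over the first few meters of manual driving fixes both the initial position and the initial heading. The sketch below illustrates that idea on fixes already projected into a local metric frame; it is a simplification (using only the first and last fix) rather than the interpolation actually used by the Navigation stack.

```python
import math

# Sketch: estimate an initial pose (x, y, heading) from RTK-GNSS fixes collected
# while driving ~5 m manually. Fixes are assumed to be already projected into a
# local metric frame (east, north) relative to the global origin.


def initial_pose_from_fixes(fixes, min_distance=5.0):
    """fixes: list of (east, north) tuples in meters, ordered in time."""
    if len(fixes) < 2:
        raise ValueError("need at least two GNSS fixes")
    x0, y0 = fixes[0]
    x1, y1 = fixes[-1]
    dx, dy = x1 - x0, y1 - y0
    if math.hypot(dx, dy) < min_distance:
        raise ValueError("drive further before initializing")
    heading = math.atan2(dy, dx)   # radians, measured from the local east axis
    return x1, y1, heading         # pose of the robot at the last fix


# Example: fixes roughly along a straight 6 m drive to the north-east.
fixes = [(0.0, 0.0), (1.5, 1.4), (3.0, 2.9), (4.4, 4.3)]
print(initial_pose_from_fixes(fixes))
```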
Once the user starts the execution of the tasks, the robot is in run-time. During this time the Cognition functionality acquires and shares information with the other functionalities, as illustrated by the solid lines in the picture above. In addition, the Cognition functionality decomposes the list of tasks (what to execute) into a list of behaviors (how to execute). The Ordination functionality polls the next behavior in this list, which is then executed by scheduling services and actions in the Perception, Navigation or Interaction functionality, as illustrated by the dashed lines in the picture above. A behavior is something that the robot is able to do, either in the real world or with raw measurement data, within a limited amount of time; for example, moving to a waypoint or taking a snapshot. The Perception, Navigation and Interaction functionalities each inform the Ordination functionality which behaviors they support, so that the Ordination functionality may call upon them when needed.
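To make the poll-and-dispatch idea more tangible, the sketch below mimics how Perception, Navigation and Interaction could register the behaviors they support, and how Ordination could then poll the behavior list produced by Cognition and execute each entry. Names and structure are illustrative assumptions; the real implementation schedules ROS 2 services and actions rather than plain Python callables.

```python
from collections import deque

# Illustrative registry: each functionality advertises the behaviors it supports.
registry = {}


def register(behavior_name):
    """Decorator used by Perception/Navigation/Interaction to advertise a behavior."""
    def wrap(handler):
        registry[behavior_name] = handler
        return handler
    return wrap


@register("move_to_waypoint")
def move_to_waypoint(params):          # would trigger a Navigation action in practice
    print(f'navigating to {params["waypoint"]}')


@register("take_snapshot")
def take_snapshot(params):             # would trigger a Perception service in practice
    print(f'taking snapshot of {params["target"]}')


# Cognition: decompose the task list (what) into an ordered list of behaviors (how).
behaviors = deque([
    ("move_to_waypoint", {"waypoint": "dock_3"}),
    ("take_snapshot", {"target": "pallet_area"}),
])

# Ordination: poll the next behavior and execute it via the registered handler.
while behaviors:
    name, params = behaviors.popleft()
    registry[name](params)
```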
In what follows, we first present a more detailed account of the operational principles of the robot, after which we continue with an in-depth explanation of each high-level functionality.
Info
You may either continue to the operational principles of the Origin One with Autopilot Inference, or to a more detailed description of the five high-level functionalities, starting with the Cognition stack.