Cognition
Components that give the robot the ability to acquire knowledge and understanding through thought, experience, and the senses.
Introduction
One of the five high-level components of the Autopilot Inference is the Cognition stack. The Cognition stack should ensure that the robot is able to understand its environment and whether or not it can execute its tasks therein, i.e., whether it will be effective. It gains this understanding by relating the real-world concepts that it uses for perception, navigation, interaction and ordination. A task, for example, is often related to a specific location in the real world, and thus also to a specific position on one of the maps that the robot has stored locally. Another example of such a relation is that, depending on the location, the robot might be required to uphold a certain policy when executing a task, such as a maximum speed or a minimum height.

The Cognition stack therefore exploits information that the robot has acquired from the Perception stack and the Navigation stack, and uses it in advanced reasoning to infer an internal representation of the real world, by which the robot is able to make high-level decisions on whether it is capable of executing tasks. Which tasks it will execute is provided by the user via a so-called "job", which will be explained in more detail later. In return, the Cognition stack provides status information back to the user, such as the progress of a task and the whereabouts of the robot. How the Cognition stack supports the other high-level functionalities, i.e., the Perception, Navigation, Interaction and Ordination stacks, will become clear in the subsequent sections of this page.
Capabilities
The capabilities that the Cognition stack provides have been designed to support the robot in gaining a deeper understanding of the real world in which it executes its tasks. The capabilities of the Cognition stack therefore depend on what is needed to gain such understanding.
Querying for requested information is a capability by which the robot is able to provide a piece of information that was requested by a functionality within the robot or by the user. For that purpose, the Cognition stack defines so-called data-concepts, which can be seen as data-objects or data-classes in which properties and relations to other data-concepts are stored. The data-concepts of the Cognition stack are listed in the table below. Note that the Cognition stack is for managing mission-relevant data and information, and will therefore not store, for example, every camera image.
| data concept | description |
|---|---|
| Job | A list of tasks with a schedule indicating when the job is to be started. |
| Task | A semantic description of what the robot is asked to do, such as GoTo, Dock, or Undock. Tasks may have relations to Constraints and Policies. |
| Constraint | A limitation on the autonomy of the robot defined by the user, thereby limiting its decision space as to where, when and how a task shall be executed. Examples are constraints on paths to follow, or geo-fences (Go-areas and NoGo-areas). |
| Policy | A limitation on the autonomy of the robot defined by the situation, thereby limiting its decision space, typically as to how a task shall be executed. Examples are a maximum speed and a maximum height. |
| Behavior | A specification of the top-level capabilities of the robot. Behaviors can be mapped to tasks with constraints and executed by the robot, such as MoveThroughPoses, MoveAlongPath, MoveInDock, MoveOutDock and InformUser. |
| Map | An occupancy grid of the environment in which the robot operates. Occupancy maps should only contain static objects. |
| Path | A specification of how a robot should move between waypoints. |
| Artifact | A real-world object that is relevant to the operation, such as the robot's dock, an Aruco marker, or even a waypoint. |
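To make the idea of data-concepts with properties and relations more concrete, the sketch below models a few of them as Python dataclasses that reference each other by ID. The class and field names are illustrative assumptions, not the actual Avular implementation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Hypothetical sketch of Cognition data-concepts; names and fields
# are assumptions for illustration, not the real data model.

@dataclass
class Path:
    id: str                          # unique ID, e.g. "p3"
    waypoints: List[Tuple[float, float]]  # positions to move between

@dataclass
class Constraint:
    id: str
    task_id: str                     # the Task this constraint limits
    path_id: Optional[str] = None    # relation: "follow this Path"

@dataclass
class Task:
    id: str                          # unique ID, e.g. "t6"
    kind: str                        # semantic type: "GoTo", "Dock", "Undock"
    constraint_ids: List[str] = field(default_factory=list)

@dataclass
class Job:
    id: str
    schedule: str                    # when the job is to be started
    task_ids: List[str] = field(default_factory=list)
```

Storing relations as IDs rather than nested objects mirrors the table above, where a Task merely *has relations to* Constraints that are managed as separate data-concepts.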
Managing operationally relevant information is a capability by which the robot is able to manage different pieces of information, i.e., data-concepts, obtained from different sources. One source is the user, who shares information about jobs, tasks, constraints and policies, as well as information about Aruco markers and waypoints. Another source of information is the robot itself, which processes its sensor data in order to gain enriched information about the robot's current environment, such as a recording of a map, used for localization, or a recording of a path of the robot, used as a constraint on a future task. In the future, information about relevant objects in the real world and their properties will be managed as well.

The example illustrated below first depicts a new path that was recorded by the robot at time T1, which is saved into the database of the Cognition stack with a unique ID="p3". When a user then makes a request to see all paths available on the robot, it will receive the information of this new path "p3". At time T2, the user defines a new job of one GoTo task that is constrained in the sense that the robot must follow the path known to the robot with ID="p3". The database is therefore updated with this new task, which was given ID="t6", and with a new constraint linking the task to the path, so that the robot knows how the user wants the task to be executed.
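The T1/T2 sequence above can be traced in a few lines of code. This is a minimal sketch assuming a simple in-memory store keyed by data-concept ID; the record layout is an assumption for illustration, not the actual Cognition database.

```python
# Hypothetical in-memory stand-in for the Cognition database.
store = {}

# T1: the robot records a new path and saves it under unique ID "p3".
store["p3"] = {"type": "path", "waypoints": [(0.0, 0.0), (3.5, 1.2)]}

# A user query for all paths now returns the new path "p3".
paths = {k: v for k, v in store.items() if v["type"] == "path"}
print(sorted(paths))  # ['p3']

# T2: the user defines a GoTo task "t6", plus a constraint linking
# the task to the already-known path "p3".
store["t6"] = {"type": "task", "kind": "GoTo"}
store["c1"] = {"type": "constraint", "task": "t6", "follow_path": "p3"}

# The robot can now resolve how the user wants the task executed.
constraint = next(v for v in store.values()
                  if v["type"] == "constraint" and v["task"] == "t6")
print(constraint["follow_path"])  # p3
```

The key point is that the constraint is a separate record that only *references* the task and the path, which is what lets a path recorded at T1 be reused by a job defined later at T2.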
Info
Avular collaborators are authorized to read the details of the developments, which can be found on the Development-Cognition page.