The general topic of the workshop is representations for bridging the gap between the sub-symbolic, low-level domain of robotics and vision and the high-level domain of symbolic AI. Traditional psychological and artificial-intelligence models of natural cognition studied cognition mainly at the symbolic level, and a significant gap remains between high-level symbolic and low-level sensorimotor representations. In this workshop we propose to bring together researchers from both areas to discuss approaches for combining continuous-domain methods with discrete representations. We also present the OAC (Object-Action Complex) concept, proposed within the European project PACO-PLUS as a mathematical formalization for bridging this representational gap. In particular, this formalization tackles the grounding of perception in action and the grounding of language through interaction.
The aim of the OAC concept is to emphasize that objects and actions are
inseparably intertwined and that categories are therefore determined (and also limited) by
the actions a cognitive agent can perform and by the attributes of the world it can perceive.
Entities ("things") in the world of a robot (or human) become semantically useful
objects only through the actions the agent can, or will, perform on them.
OACs are proposed as a universal representation enabling efficient planning and execution
of purposeful action at all levels of a situated architecture. OACs combine the
representational and computational efficiency for purposes of search (the frame problem) of
STRIPS rules and the object- and situation-oriented concept of affordance with the logical
clarity of the event calculus. Affordance is the relation between a situation, usually including
an object of a defined type, and the actions that it allows. While affordances have mostly
been analyzed in their purely perceptual aspect, the OAC concept defines them more
generally as state-transition functions suited to prediction. Such functions can be used for
efficient forward-chaining planning, learning, and execution of actions represented
simultaneously at multiple levels in an embodied agent architecture.
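To make the state-transition view concrete, the following is a minimal sketch of how an OAC-like unit could be modeled and used for forward-chaining planning. All names, predicates, and the specific representation (an OAC as a precondition test plus a prediction function over sets of symbolic facts) are illustrative assumptions for this sketch, not the PACO-PLUS formalization itself.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, List, Optional, Sequence

# A state is a set of symbolic facts, e.g. {"cup_on_table", "hand_empty"}.
State = FrozenSet[str]

@dataclass(frozen=True)
class OAC:
    """A hypothetical Object-Action Complex: a name, an applicability test
    (the affordance as a relation between situation and action), and a
    state-transition function used for prediction."""
    name: str
    applicable: Callable[[State], bool]
    predict: Callable[[State], State]

def forward_chain(start: State, goal: FrozenSet[str],
                  oacs: Sequence[OAC], max_depth: int = 5) -> Optional[List[str]]:
    """Breadth-first forward-chaining planner: repeatedly apply the
    prediction functions of applicable OACs until the goal facts hold."""
    frontier = [(start, [])]
    seen = {start}
    while frontier:
        state, plan = frontier.pop(0)
        if goal <= state:          # all goal facts are satisfied
            return plan
        if len(plan) >= max_depth:
            continue
        for oac in oacs:
            if oac.applicable(state):
                nxt = oac.predict(state)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [oac.name]))
    return None

# Two invented OACs for a toy cup-grasping scenario.
grasp = OAC(
    "grasp_cup",
    lambda s: "hand_empty" in s and "cup_on_table" in s,
    lambda s: (s - {"hand_empty", "cup_on_table"}) | {"holding_cup"},
)
lift = OAC(
    "lift_cup",
    lambda s: "holding_cup" in s,
    lambda s: s | {"cup_raised"},
)

start = frozenset({"hand_empty", "cup_on_table"})
plan = forward_chain(start, frozenset({"cup_raised"}), [grasp, lift])
# plan == ["grasp_cup", "lift_cup"]
```

Because each OAC carries an explicit prediction function, the same structure can in principle serve execution (run the real action, compare the outcome to the prediction) and learning (adjust the prediction when it fails), which is the multi-level role the text describes.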