We thank all Invited Speakers, Contributed Talk presenters and Listeners for attending our ICRA 2021 Workshop!

The recordings of all talks are now online and can be found on our program page!

Welcome to the

ICRA 2021 Workshop on Semantic Representations for Robotics through Continuous Interaction and Incremental Learning

May 31, 2021 – 11:00 CET

Abstract – Robots coexisting with humans in their environment and assisting them in daily tasks should be able to i) learn from interaction with the physical world and with humans in a continual way, ii) interpret situations and reason about action effects, iii) adapt and generalize learned knowledge and skills to novel environments and tasks, and iv) communicate with humans in a natural and efficient way. Such robots must be able to rapidly create new concepts and react to unanticipated situations in the light of previously acquired knowledge, making generative use of experience through predictive processes. This process is largely driven by internal models built from prior experience. Thus, intelligent robots must also be able to continually learn from others by sharing generative, experience-based knowledge through teaching and interaction. Combining exploration-based learning, generative modelling and natural language understanding is fundamental for building interpretable shared representations, understandable by both robots and humans, that bootstrap intuitive human-robot interaction and communication.

Goal

Advances in incremental learning and semantic representation learning have been developed in domain-specific communities, including robotics, natural language processing, computer vision and reinforcement learning. This workshop is intended to bring researchers from these communities together to discuss synergies and current challenges and to explore new joint research directions.

We aim to improve interaction and communication across a diverse set of scientists at various stages of their careers. Instead of accepting the common trade-off between attracting a wider audience with well-known speakers and enabling early-stage researchers to voice their opinions, we encourage each of our senior presenters to share their presentation with a PhD student or postdoc from their lab. We also ask all our presenters – invited and contributed – to include a “dirty laundry” slide describing the limitations and shortcomings of their work. We expect this will foster discussion in the poster and panel sessions and help junior researchers avoid similar roadblocks along their path.

The workshop will include invited talks from renowned researchers in these fields, contributed talks and poster sessions. Sponsorship will be solicited from Toyota Research Institute, Google, Facebook and other organizations to fund contributed paper awards and travel awards.

Topics of Interest

The workshop will cover a wide variety of topics, including (but not limited to):

  • Incremental learning and continuous adaptation of robot skills
  • Learning of multimodal semantic representations for robotics
  • Grounding natural language in action and perception 
  • Semantic action representation 
  • Verbalization of robot experience for human-robot interaction
  • Combining human demonstrations, natural language and unsupervised/self-learning for robot skill learning and adaptation 
  • Semantic scene understanding based on object affordances and spatial and temporal relations
  • Multimodal memory representations of robot experience
  • Incremental learning for predicting scene dynamics and action effects
  • Unsupervised and self-supervised learning to gain experience
  • Reasoning about similarity between scenes, actions and tasks