Humans naturally interact and collaborate in unstructured social environments, which produce an overwhelming amount of information and in which behaviourally relevant variables may remain hidden.
Finding the underlying design principles that allow humans to adaptively find and select relevant information is important not only for Robotics but also for other fields, such as Computational Neuroscience, Interaction Design, and Computer Vision.
Current solutions cover specific tasks, e.g. autonomous cars, and usually employ over-redundant, expensive, and computationally demanding sensory systems that attempt to cover the wide set of sensing conditions the systems may have to deal with. A promising alternative is to take inspiration from the brain. Adaptive control of the sensors and of the perception process is a key solution found by nature to cope with computational and sensory demands, as shown by the foveal anatomy of the eye and its high mobility.
Alongside these advances in “active” vision, collaborative robotics has recently progressed to human-robot interaction in real manufacturing settings.
Partners’ gaze behaviours are a crucial source of information that humans exploit for collaboration and coordination. Measuring and modelling task-specific gaze behaviours therefore seems essential for smooth human-robot interaction. Indeed, anticipatory control for human-in-the-loop architectures, which can enable robots to collaborate proactively with humans, could gain much from parsing the gaze and action patterns of their human partners.
This workshop was held over two days as part of the RO-MAN 2020 conference. The talks and papers delivered covered a range of topics including social interaction and navigation between robots and humans, human-robot collaboration, and robot vision.
Talks and papers presented during the workshop have been published below on an open access basis.
Selected papers will be published in a dedicated special issue of Frontiers in Neurorobotics, a high-quality open access journal.
We would like to thank everyone who submitted papers for consideration, as well as our invited speakers and everyone who presented their research during the workshop.
Professor Giulio Sandini - Predictive vision in Human Robot collaboration (.PDF)
Professor Fiora Pirri - Perception, activities and sustained attention when robots help humans (.PDF)
Professor Angelo Cangelosi - Developmental robotics: Language learning, trust, and theory of mind (.PDF)
Dr Tom Foulsham - Attention during social interaction (.PDF)
Professor Giuseppe Boccignone - A probabilistic tour of visual attention and gaze shift computational models (.PDF)
O. Eldardeer, G. Sandini, F. Rea, Cognitive Models of Multi-sensory Joint Attention in human robot collaborative tasks (.PDF) (presentation PDF).
Dano Roost, Ralph Meier, Giovanni Toffetti Carughi, and Thilo Stadelmann, Combining Reinforcement Learning with Supervised Deep Learning for Neural Active Scene Understanding (.PDF) (presentation PDF).
Jun Kwan, Chinkye Tan, and Akansel Cosgun, Gesture Recognition for Initiating Human-to-Robot Handovers (.PDF) (presentation PDF).
Stefan Fuchs, and Anna Belardinelli, Gaze-based intention recognition for pick-and-place tasks in shared autonomy (.PDF)
Ziwen Jiang, Naizheng Tang, Lixin Xu, Steffi Hußlein, Speculating on the behaviors of the blind people in communication with others to AI. (.PDF)
Natalie Friedman, David Goedicke, Vincent Zhang, Dmitriy Rivkin, Michael Jenkin, Ziedune Degutyte, Arlene Astell, Xue Liu, and Gregory Dudek, Out of my way! Exploring Different Modalities for Robots to Ask People to Move Out of the Way. (.PDF) (presentation PDF).
Talk title: Predictive Vision in Human Robot Collaboration
Director of Research, Istituto Italiano di Tecnologia (IIT)
Abstract: The use of perceptual information during human-robot collaboration cannot be limited to reactive, real-time processes. Joint activities require shared goals and intentions, and collaboration is based on proactive, anticipatory processes. The aim of the talk is to present how motion primitives in biological systems are mapped into visual features that implicitly embed the intentions and internal state of an agent, and how such features can be exploited during human-robot collaborative tasks.
Talk title: Modelling and imitation of attentional behaviours in complex tasks
Professor of Computer Science, Università di Roma La Sapienza
Abstract: Human visual exploration provides an important source of information for enabling robot vision with the task-specific selection strategies necessary to deal with the complexity of the real world. Techniques for recording such strategies and replicating them in robots will be reviewed.
Talk title: Introduction to the Projective Consciousness Model
Associate Professor of Psychology, University of Geneva
Abstract: Emergent psychology-inspired cybernetic frameworks for integrating perception, imagination, emotion, social cognition and action in global optimisation solutions for autonomous virtual and robotic agents.
Talk title: Theory of Mind for Trust and Intention Reading in Human Robot Collaboration
Professor of Machine Learning & Robotics, University of Manchester
Abstract: We present a developmental robotics model and a set of experiments on theory of mind for trust, intention reading and communication in Human-Robot Interaction. Taking inspiration from developmental psychology experiments on theory of mind and trust, the robot uses this developmental model to learn, via interaction with the users, about their intentions and reliability.
Talk title: Attention during social interaction
Reader in Psychology, University of Essex
Abstract: A review of work from experimental psychology which examines how humans pay attention to each other during conversation and other interactive situations. Studying this behaviour requires moving to more ecologically valid situations, and the results have implications for real and virtual interaction.
Talk title: A probabilistic tour of visual attention and gaze shift computational models
Full Professor, Università Statale di Milano
Abstract: In this talk a number of problems are considered which are related to the modelling of eye guidance under visual attention in a natural setting. First, from a brief discussion of a variety of available models spelled out in probabilistic terms, we show that current approaches in computational vision are still far from achieving the goal of an active observer relying upon eye guidance to accomplish real-world tasks. Second, we argue that this challenging goal requires embodying, in a principled way, the problem of eye guidance within the action/perception loop. In particular, we consider how to design specific oculomotor priors (oculomotor tendencies and biases) and how this endeavour can be accomplished in the framework of animal foraging models. Finally, to suggest future studies and directions for the field, we discuss the inextricable link tying together visual attention, emotion, and executive control, weighing up recent neurobiological findings.
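The foraging-style view of gaze shifts mentioned in this abstract can be illustrated with a minimal simulation. The sketch below is not code from the talk and all names and parameters are illustrative assumptions: saccade amplitudes are drawn from a heavy-tailed (Cauchy) oculomotor prior, so frequent short shifts are punctuated by rare long relocations, and candidate landing points are accepted in proportion to local salience.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_gaze_shifts(salience, n_shifts=50, scale=15.0):
    """Sample a gaze trajectory over a salience map (values in [0, 1]).

    Illustrative sketch of a foraging-style random walk: saccade lengths
    follow a heavy-tailed Cauchy prior; candidate landing points are
    accepted with probability equal to their local salience.
    """
    h, w = salience.shape
    y, x = h // 2, w // 2              # start fixation at the centre
    trajectory = [(x, y)]
    for _ in range(n_shifts):
        for _ in range(100):           # retry until a candidate is accepted
            # Heavy-tailed amplitude: many short saccades, rare long ones
            amp = abs(rng.standard_cauchy()) * scale
            ang = rng.uniform(0.0, 2.0 * np.pi)
            cx = int(np.clip(x + amp * np.cos(ang), 0, w - 1))
            cy = int(np.clip(y + amp * np.sin(ang), 0, h - 1))
            if rng.uniform() < salience[cy, cx]:
                x, y = cx, cy          # accept: relocate gaze
                break
        # if no candidate was accepted, the current fixation is maintained
        trajectory.append((x, y))
    return trajectory

# Toy salience map: a single Gaussian "object" off-centre
yy, xx = np.mgrid[0:128, 0:128]
salience = np.exp(-((xx - 90) ** 2 + (yy - 40) ** 2) / (2 * 12.0 ** 2))
print(simulate_gaze_shifts(salience, n_shifts=5))
```

Real models of this family condition the prior on task and oculomotor biases rather than using a fixed scale, but the accept/reject structure above captures the basic interplay of a stochastic motor prior and a salience field.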
This workshop was held as part of The 29th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN 2020), a leading forum where state-of-the-art innovative results, the latest developments, and future perspectives relating to robot and human interactive communication are presented and discussed.
The conference covered a wide range of topics related to robot and human interactive communication, involving theories, methodologies, technologies, and empirical and experimental studies. Papers related to the study of robotic technology, psychology, cognitive science, artificial intelligence, human factors, ethics and policies, interaction-based robot design, and other topics related to human-robot interaction were presented.
We would like to thank everyone listed in the sections below for generously giving their time to help make this workshop a success.
Dr Dimitri Ognibene is a Lecturer in the School of Computer Science and Electronic Engineering at the University of Essex, UK, and at the University of Milano-Bicocca, Italy. His main interest lies in understanding how social agents with bounded sensory and computational resources adapt to complex and uncertain environments. To this end, he develops both neural and Bayesian algorithms and applies them in both physical (e.g. robots) and virtual (e.g. social media) settings.
We would like to thank our sponsors for supporting this workshop.