Humans naturally interact and collaborate in unstructured social environments that produce an overwhelming amount of information and may yet hide behaviourally relevant variables.
Finding the underlying design principles that allow humans to adaptively find and select relevant information is important not only for Robotics but also for other fields, such as Computational Neuroscience, Interaction Design, and Computer Vision.
Current solutions cover specific tasks, e.g. autonomous driving, and usually employ redundant, expensive, and computationally demanding sensory systems that attempt to cover the wide range of sensing conditions the systems may face. A promising alternative is to take inspiration from the brain. Adaptive control of the sensors and of the perception process is a key solution found by nature to cope with computational and sensory demands, as shown by the foveal anatomy of the eye and its high mobility.
Alongside this application of “active” vision, collaborative robotics has recently progressed to human-robot interaction in real manufacturing.
Partners’ gaze behaviours are a crucial source of information that humans exploit for collaboration and coordination. Thus, measuring and modelling task-specific gaze behaviours seems essential for smooth human-robot interaction. Indeed, anticipatory control for human-in-the-loop architectures, which can enable robots to proactively collaborate with humans, could gain much from parsing the gaze and action patterns of human partners.
We are interested in manuscripts that present novel, brain inspired computational and robotic models, theories and experimental results as well as reviews relevant to these topics. Submissions should further our understanding of how humans actively control their perception during social interaction, in which conditions they fail, and how these insights may enable natural interaction between humans and embodied artificial systems in non-trivial conditions.
Update: Due to COVID-19, RO-MAN 2020 and AVHRC 2020 are going virtual.
The transformation of the workshop into a virtual event allows greater flexibility. The deadline for contributions to AVHRC 2020, the collaborative workshop on active vision and perception in human(-robot) interaction, has therefore been extended to 17 July to allow more refined contributions.
Two types of submissions are invited to the workshop: long papers (6-8 pages + n pages of references) and short papers (2-4 pages + n pages of references). In both cases there is no page limit for the bibliography/references (n pages) section.
All submissions should be formatted according to the standard IEEE RAS Formatting Instructions and Templates.
Authors are required to submit their papers electronically in PDF format. At least one author of each accepted paper must register for the workshop.
For any questions regarding paper submission, please email Dr Dimitri Ognibene (firstname.lastname@example.org).
Papers will be presented in short talks and/or poster spotlights.
The organisers would like to reassure authors that, independently of any potential restriction due to the COVID-19 situation, it will be possible to present all accepted papers and to attend the keynotes, either in person or remotely, following the same rules and the same procedure of the main conference.
At what is a difficult time for many people, we look forward to sharing our work with the community despite any restrictions and we invite interested colleagues to join us. The organisers of Ro-man 2020 will announce further details in due course.
All accepted papers will be published on the workshop website.
Selected papers will be published in a dedicated special issue of Frontiers in Neurorobotics, a high-quality open-access journal.
A best paper award will be announced, offering a full publication fee waiver.
We have created a list of potential topics that would be suitable for a paper. If you have an idea for a topic that you think is relevant but isn't on the list, please email Dr Dimitri Ognibene (email@example.com) to discuss it further.
Talk title: Predictive Vision in Human Robot Collaboration
Director of Research, Istituto Italiano di Tecnologia (IIT)
Abstract: The use of perceptual information during human-robot collaboration cannot be limited to reactive, real-time processes. Joint activities require shared goals and intentions, and collaboration is based on proactive, anticipatory processes. The aim of the talk is to present how motion primitives in biological systems are mapped into visual features that implicitly embed the intentions and internal state of the agent, and how such features can be exploited during human-robot collaborative tasks.
Talk title: Modelling and imitating attentional behaviours in complex tasks
Professor of Computer Science, Università di Roma La Sapienza
Abstract: Human visual exploration provides an important source of information for endowing robot vision with the task-specific selection strategies necessary to deal with the complexity of the real world. Techniques for recording such strategies and replicating them in robots will be reviewed.
Talk title: Introduction to the Projective Consciousness Model
Associate Professor of Psychology, University of Geneva
Abstract: Emergent psychology-inspired cybernetic frameworks for integrating perception, imagination, emotion, social cognition and action in global optimisation solutions for autonomous virtual and robotic agents.
Talk title: Theory of Mind for Trust and Intention Reading in Human Robot Collaboration
Professor of Machine Learning & Robotics, University of Manchester
Abstract: We present a developmental robotics model and a set of experiments on theory of mind for trust, intention reading and communication in Human-Robot Interaction. Taking inspiration from developmental psychology experiments on theory of mind and trust, the robot uses this developmental model to learn, via interaction with the users, about their intentions and reliability.
Talk title: Attention during social interaction
Reader in Psychology, University of Essex
Abstract: A review of work from experimental psychology which examines how humans pay attention to each other during conversation and other interactive situations. Studying this behaviour requires moving to more ecologically valid situations, and the results have implications for real and virtual interaction.
Talk title: A probabilistic tour of visual attention and gaze shift computational models
Full Professor, Università Statale di Milano
Abstract: In this talk a number of problems are considered related to the modelling of eye guidance under visual attention in a natural setting. First, from a brief discussion of the variety of available models framed in probabilistic terms, we show that current approaches in computational vision are still far from achieving the goal of an active observer who relies on eye guidance to accomplish real-world tasks. Second, we argue that this challenging goal requires embedding, in a principled way, the problem of eye guidance within the action/perception loop. In particular, we will consider how to design specific oculomotor priors (oculomotor tendencies and biases) and how this endeavour can be accomplished within the framework of animal foraging models. Finally, to suggest future studies and directions for the field, we discuss the inextricable link tying together visual attention, emotion, and executive control, weighing up recent neurobiological findings.
A tutorial on active vision for human-robot collaboration was held at the 12th International Conference on Computer Vision Systems (ICVS) in Thessaloniki in 2019.
The workshop will cover the multidisciplinary state of the art on the role of adaptive vision and perception in collaboration with humans. It is intended for both students and more senior academics, but it will also highlight constraints and issues relevant to industry.
RO-MAN 2020 - The 29th IEEE International Conference on Robot and Human Interactive Communication is a leading forum where state-of-the-art innovative results, the latest developments as well as future perspectives relating to robot and human interactive communication are presented and discussed.
The conference covers a wide range of topics related to robot and human interactive communication, involving theories, methodologies, technologies, and empirical and experimental studies. Papers related to robotic technology, psychology, cognitive science, artificial intelligence, human factors, ethics and policies, interaction-based robot design, and other topics related to human-robot interaction are welcome.
Dimitri Ognibene is a Lecturer in the School of Computer Science and Electronic Engineering at the University of Essex, UK, and at the University of Milano-Bicocca, Italy. His main interest lies in understanding how social agents with bounded sensory and computational resources adapt to complex and uncertain environments. To this end, he develops both neural and Bayesian algorithms and applies them in physical settings, e.g. robots, and virtual ones, e.g. social media.
We would like to thank our sponsors for supporting this workshop.