Pointing Gestures for Human-Robot Proximity Interaction

This project addresses the problem of interacting with robots in the user's proximity, within the direct line of sight.

We study ways to capture and use human pointing gestures with the help of wearable sensors. This approach allows us to avoid the typical problems of computer-vision systems, such as occlusions, low illumination, and the need for extensive computational resources. Its main drawback is the lack of relative localization between the user and the robot, knowledge that is usually available for “free” with vision systems. To recover it, we ask the user to point at and follow the autonomously moving robot for a few seconds; during this stage we collect synchronized pairs of pointing rays and robot positions. We then align the pointing rays with the robot's trajectory to estimate the coordinate-frame transformation between the two agents.
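As a rough illustration of the alignment step, the sketch below (a minimal Python example, not our actual implementation; all names and the 4-DoF parametrization are hypothetical) estimates a transform, assuming both frames share the gravity axis, by minimizing the summed point-to-ray distances between the transformed robot positions and the recorded pointing rays:

    import numpy as np
    from scipy.optimize import minimize

    def point_ray_distance(q, origin, direction):
        # Distance from point q to a ray with the given origin and unit direction.
        v = q - origin
        t = max(np.dot(v, direction), 0.0)  # clamp: it is a ray, not a line
        return np.linalg.norm(v - t * direction)

    def estimate_transform(ray_origins, ray_dirs, robot_positions):
        # Fit (x, y, z, yaw) mapping robot-odometry coordinates into the human
        # frame, assuming both frames share the gravity (z) axis.
        def cost(params):
            x, y, z, yaw = params
            c, s = np.cos(yaw), np.sin(yaw)
            R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
            t = np.array([x, y, z])
            return sum(point_ray_distance(R @ p + t, o, d) ** 2
                       for o, d, p in zip(ray_origins, ray_dirs, robot_positions))
        return minimize(cost, x0=np.zeros(4), method="Nelder-Mead").x

The exact formulation we use differs; see the publications below for details.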

An overview of our approach is shown in the following video:

For more details, please refer to the publications section at the bottom of the page.

Live Demo @HUBweek (Boston, MA)

We demonstrated our system live on October 8–9, 2018, at the Aerial Futures: The Drone Frontiers event during HUBweek in Boston, MA, USA.

This implementation uses the MetaMotionR+ bracelet (similar to a smartwatch) and the Crazyflie 2.0 drone. Note that we do not use any external localization system; the drone is controlled directly in its (visual) odometry frame.
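For reference, controlling the Crazyflie in its own odometry frame can be done by streaming position setpoints, e.g., through the official cflib Python API. The snippet below is only a sketch of this idea (the radio URI and setpoint values are placeholders), not the code we ran at the demo:

    import time
    import cflib.crtp
    from cflib.crazyflie import Crazyflie
    from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

    URI = 'radio://0/80/2M'  # placeholder radio address

    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie()) as scf:
        cf = scf.cf
        # Stream setpoints expressed directly in the drone's (visual) odometry frame.
        for _ in range(100):
            cf.commander.send_position_setpoint(0.0, 0.0, 0.5, 0.0)  # x, y, z [m], yaw [deg]
            time.sleep(0.1)
        cf.commander.send_stop_setpoint()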

Awards @HRI 2019 (Daegu, South Korea)

A new version of the demo and the accompanying video were very well received at the Human-Robot Interaction (HRI) 2019 conference, held on March 11–14, 2019 in Daegu, South Korea. We received both the Best Demo award and an honorable mention (Best Video, according to reviewers)!

Acknowledgment

This work was partially supported by the Swiss National Science Foundation (SNSF) through the National Centre of Competence in Research (NCCR) Robotics.

Publications

  1. B. Gromov, G. Abbate, L. Gambardella, and A. Giusti, “Proximity Human-Robot Interaction Using Pointing Gestures and a Wrist-mounted IMU,” in 2019 IEEE International Conference on Robotics and Automation (ICRA), 2019, pp. 8084–8091.
  2. B. Gromov, J. Guzzi, G. Abbate, L. Gambardella, and A. Giusti, “Video: Pointing Gestures for Proximity Interaction,” in HRI ’19: 2019 ACM/IEEE International Conference on Human-Robot Interaction, March 11–14, 2019, Daegu, Rep. of Korea, 2019.
  3. B. Gromov, J. Guzzi, L. Gambardella, and A. Giusti, “Demo: Pointing Gestures for Proximity Interaction,” in HRI ’19: 2019 ACM/IEEE International Conference on Human-Robot Interaction, March 11–14, 2019, Daegu, Rep. of Korea, 2019.
  4. B. Gromov, L. Gambardella, and A. Giusti, “Robot Identification and Localization with Pointing Gestures,” in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 3921–3928.
  5. B. Gromov, L. Gambardella, and A. Giusti, “Video: Landing a Drone with Pointing Gestures,” in HRI ’18 Companion: 2018 ACM/IEEE International Conference on Human-Robot Interaction Companion, March 5–8, 2018, Chicago, IL, USA, 2018.
  6. D. Broggini, B. Gromov, L. M. Gambardella, and A. Giusti, “Learning to detect pointing gestures from wearable IMUs,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, February 2–7, 2018, New Orleans, Louisiana, USA, 2018.
  7. B. Gromov, L. M. Gambardella, and G. A. Di Caro, “Wearable multi-modal interface for human multi-robot interaction,” in 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2016, pp. 240–245.