Human-like (HL)


Guzzi, J., Giusti, A., Gambardella, L.M., Theraulaz, G. and Di Caro, G.A., 2013, May. Human-friendly robot navigation in dynamic environments. In 2013 IEEE International Conference on Robotics and Automation (pp. 423-430). IEEE.

This work introduces our navigation behavior, which mimics the model for pedestrians described in

Moussaïd, M., Helbing, D. and Theraulaz, G., 2011. How simple rules determine pedestrian behavior and crowd disasters. Proceedings of the National Academy of Sciences, 108(17), pp. 6884-6888.

Human-like searches for the direction along which the agent would come nearest to the target before any possible collision, then applies a heuristic to select a safe speed in that direction. By construction, Human-like never retreats, because staying in place is always preferable (i.e., nearer to the target) to moving away.
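Below is a minimal, self-contained Python sketch of this two-step selection. It is not navground's implementation: the time_to_collision callback is a hypothetical stand-in for the behavior's collision prediction, and the direction sampling and speed heuristic are only illustrative.

import math

def choose_direction_and_speed(agent_pos, target_pos, optimal_speed, horizon,
                               time_to_collision, num_angles=33):
    """Pick the direction that brings the agent nearest to the target before a
    possible collision, then a safe speed along it (illustrative sketch)."""
    to_target = (target_pos[0] - agent_pos[0], target_pos[1] - agent_pos[1])
    target_dist = math.hypot(*to_target)
    target_angle = math.atan2(to_target[1], to_target[0])
    # Staying in place is the baseline: a direction is only accepted if it
    # brings the agent strictly nearer to the target (never retreat).
    best_angle, best_dist = None, target_dist
    for i in range(num_angles):
        # Sample candidate directions around the direction towards the target.
        angle = target_angle + math.pi * (2 * i / (num_angles - 1) - 1)
        dx, dy = math.cos(angle), math.sin(angle)
        # Free path length along this direction before the first predicted
        # collision, capped by the horizon.
        free = min(optimal_speed * time_to_collision(angle),
                   optimal_speed * horizon)
        # Point of the free segment that comes nearest to the target.
        s = max(0.0, min(free, to_target[0] * dx + to_target[1] * dy))
        dist = math.hypot(agent_pos[0] + s * dx - target_pos[0],
                          agent_pos[1] + s * dy - target_pos[1])
        if dist < best_dist:
            best_angle, best_dist = angle, dist
    if best_angle is None:
        # Every direction would move the agent away from the target: stop.
        return target_angle, 0.0
    tau = time_to_collision(best_angle)
    # Illustrative speed heuristic: slow down when a collision is imminent.
    speed = optimal_speed if math.isinf(tau) else min(
        optimal_speed, optimal_speed * tau / horizon)
    return best_angle, speed

if __name__ == "__main__":
    # With no obstacles, the agent heads straight for the target at full speed.
    print(choose_direction_and_speed((0.0, 0.0), (5.0, 0.0), 0.12, 5.0,
                                     lambda angle: math.inf))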

Example

The video was recorded using

$ navground_py record_video hl.yaml hl.mp4 --factor 5

with the following configuration

steps: 1000
time_step: 0.1
scenario:
  type: Cross
  agent_margin: 0.1
  side: 4
  target_margin: 0.1
  tolerance: 0.5
  groups:
    -
      type: thymio
      number: 20
      radius: 0.08
      control_period: 0.1
      speed_tolerance: 0.02
      kinematics:
        type: 2WDiff
        wheel_axis: 0.094
        max_speed: 0.166
      state_estimation:
        type: Bounded
        range: 5.0
      behavior:
        type: HL
        optimal_speed: 0.12
        horizon: 5.0
        safety_margin: 0.02
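
The same configuration can also be run without recording a video. The following is a minimal sketch, assuming the navground Python bindings expose sim.load_experiment and Experiment.run (treat both names as assumptions and check them against the installed API):

# Minimal sketch; assumes the navground Python bindings are installed and
# that sim.load_experiment / Experiment.run are available.
from navground import sim

# Load the experiment configuration shown above, saved as hl.yaml.
with open("hl.yaml") as f:
    experiment = sim.load_experiment(f.read())

# Run the 1000 simulation steps defined by the configuration.
experiment.run()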