Dangerous HRI: Testing Real-World Robots has Real-World Consequences

HRI Workshop, Daegu, South Korea, March 11, 2019

Robotic rescuers digging through rubble, fire-fighting drones flying over populated areas, robotic servers pouring hot coffee for you, and a nursing robot checking your vitals are all examples of current or near-future situations where humans and robots are expected to interact in dangerous situations. Dangerous HRI is an as-yet understudied area of the field. We define dangerous HRI as situations where humans experience some risk of bodily harm while interacting with robots. This interaction can take many forms: the human may be a bystander (e.g., a pedestrian waiting at a crossing for an autonomous car), a recipient of robotic assistance (e.g., from a rescue robot), or a teammate (e.g., of an autonomous robot working with a SWAT team). To facilitate better study of this area, the Dangerous HRI workshop will bring together researchers who perform experiments involving some risk of bodily harm to participants, and discuss strategies for mitigating this risk while still maintaining the validity of the experiment. This workshop does not aim to tackle the general problem of human safety around robots; instead, it will focus on guidelines for, and experience from, experimenters.

Important Dates

Paper submission: February 20, 2019 (extended from January 15, 2019)
Acceptance/rejection notification: February 28, 2019 (originally February 1, 2019)
Final version of papers: March 7, 2019 (originally February 8, 2019)

Workshop: March 11, 2019

Desired Outcomes of This Workshop

  • Develop best practices for HRI experiments in dangerous situations
  • Share experiences with other researchers already exploring this area
  • Define what makes an interaction dangerous, so that future researchers can better understand the risks of potential experiments
  • Build and nurture a new community that bridges researchers from different backgrounds


We welcome submissions formatted as extended abstracts (2 pages, IEEE format), position papers (2-6 pages, IEEE format), or whitepapers (open format). Submissions should address either past experiences testing or deploying robots in situations where participants could be harmed, or guidelines for such experiments beyond the scope of normal IRB training. Each paper will be assigned two reviewers. Below are some suggested topics, but submissions outside these topics are welcome as well:

  • Dangerous human-robot environments (undersea, mining, space)
  • Military and defense environments
  • Dangerous human-robot collaborations (rescue, autonomous surgery, automated pilots)
  • What constitutes too much risk in an experiment?
  • Mitigating injury in experiments
  • Hidden dangers of HRI experiments

Papers should be submitted through EasyChair: https://easychair.org/conferences/?conf=dangeroushri2019

If there are any issues with submissions, please contact Paul Robinette, paulrobi@mit.edu


Organizing Committee

Paul Robinette paulrobi@mit.edu

Michael Novitzky novitzky@mit.edu

Brittany Duncan bduncan@unl.edu

Myounghoon Jeon (Philart) myounghoonjeon@vt.edu

Alan Wagner alan.r.wagner@psu.edu

Chung Hyuk Park chpark@gwu.edu