There are two types of power analyses: a-priori and post-hoc. The difference between them primarily lies in the value being calculated.
With an a-priori power analysis, you are determining the minimum necessary number of participants, given the expected effect size (small, medium, large). The expected effect size can be established based on previous literature (e.g., meta-analyses, other similar work), or a pilot study. If there is no relevant work, common practice is to assume a small effect size by default.
Conversely, a post-hoc power analysis determines the minimum effect size that can be detected given your sample size. A small sample size usually means that only large effect sizes can be detected. In this case, non-significant results are harder to interpret, as the effect may exist but be too small to detect with the current sample size.
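As an illustration, here is a minimal sketch in Python using statsmodels, assuming an independent-samples t-test with two equal-sized groups (the sample size is a placeholder). Solving for effect size in this way is what G*Power calls a "sensitivity" analysis:

```python
# Minimum detectable effect size for a fixed sample, assuming an
# independent-samples t-test. statsmodels solves for whichever
# parameter of solve_power() is left unspecified.
from statsmodels.stats.power import TTestIndPower

min_d = TTestIndPower().solve_power(nobs1=25, alpha=0.05, power=0.8)
print(f"Minimum detectable Cohen's d with 25 per group: {min_d:.2f}")  # ~0.81
```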
Power analyses rely on four parameters: alpha (typically 0.05), power (1 − beta), effect size, and n (number of participants); fixing any three determines the fourth. In an a-priori power analysis, you specify alpha, power, and the expected effect size to calculate n. In a post-hoc power analysis, you specify alpha, effect size, and n to calculate the achieved power.
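Both calculations can be sketched with the same statsmodels helper, again assuming an independent-samples t-test (the effect sizes and sample sizes below are placeholders, not recommendations):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A-priori: specify alpha, power, and the expected effect size (Cohen's d)
# to solve for the required sample size per group.
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(f"Required participants per group: {n_per_group:.0f}")  # ~64

# Post-hoc: specify alpha, effect size, and the sample size you actually
# collected to solve for the achieved power (1 - beta).
achieved = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(f"Achieved power with 20 per group: {achieved:.2f}")  # ~0.34
```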
Beta refers to the probability of a type II error, i.e., failing to reject the null hypothesis when an effect actually exists; power is 1 − beta, the probability of detecting a true effect. When specifying power a-priori, the conventional threshold is 0.8, meaning a 20% chance (beta = 0.2) of missing a true effect. Thus, the lower the power, the less informative your results are: with low power, non-significant results cannot really be interpreted (absence of evidence does not equal evidence of absence).
There is also some evidence to suggest that significant effects found in low-powered studies are more likely to be Type I errors. Read more here.
It is usually recommended to do an a-priori power analysis, that is, to calculate the required number of participants to detect your effect (based on previous literature, not your actual results). This is because post-hoc power analyses rely on the effect size that you found, which is itself biased by the sample size, making the reasoning somewhat circular. See this article for more details.
In recent years, awareness within academia around responsible research has notably increased. For instance, with advances in machine learning and artificial intelligence, recent efforts have been made to promote ethical, fair, and inclusive AI and robotics. However, the field of human-robot interaction (HRI) is seemingly lagging behind these practices. To better understand if and to what extent HRI is incentivizing researchers to engage in responsible research, we conducted an exploratory review of the publishing guidelines of the most popular HRI conference venues. We identified 18 conferences which published at least 7 HRI papers in 2022. From these, we discuss four themes relevant to conducting responsible HRI research in line with the Responsible Research and Innovation framework: ethical and human participant considerations, transparency and reproducibility, accessibility and inclusion, and plagiarism and LLM use. We identify several gaps and room for improvement within HRI regarding responsible research. Finally, we establish a call to action to provoke novel conversations among HRI researchers about the importance of conducting responsible research within emerging fields like HRI.
Column abbreviations follow the four themes above: E&HP = ethical and human participant considerations; A&I = accessibility and inclusion; T&R = transparency and reproducibility; PL = plagiarism and LLM use.

| Source title | # HRI papers (2022) | Acronym | E&HP | A&I | T&R | PL |
|---|---|---|---|---|---|---|
| ACM/IEEE International Conference on Human-Robot Interaction | 174 | HRI | ✓ | ✓ | | |
| RO-MAN 2022 - 31st IEEE International Conference on Robot and Human Interactive Communication | 113 | ROMAN | | | | |
| IEEE International Conference on Intelligent Robots and Systems | 41 | IROS | | | | |
| Proceedings - IEEE International Conference on Robotics and Automation | 38 | ICRA | | | | |
| Conference on Human Factors in Computing Systems - Proceedings | 25 | CHI | ✓ | ✓ | | |
| HAI 2022 - Proceedings of the 10th Conference on Human-Agent Interaction | 20 | HAI | ✓ | | | |
| IEEE-RAS International Conference on Humanoid Robots | 16 | Humanoid | | | | |
| 2022 IEEE International Conference on Robotics and Biomimetics, ROBIO 2022 | 12 | ROBIO | ✓ | | | |
| 2022 IEEE International Conference on Development and Learning, ICDL 2022 | 12 | ICDL | ✓ | | | |
| Proceedings of the IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics | 12 | BIOROB | | | | |
| Conference Proceedings - IEEE International Conference on Systems, Man and Cybernetics | 11 | SMC | | | | |
| 2022 10th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos, ACIIW 2022 | 10 | ACII | ✓ | | | |
| Proceedings of the 2022 IEEE International Conference on Human-Machine Systems, ICHMS 2022 | 9 | ICHMS | | | | |
| UMAP2022 - Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization | 8 | UMAP | ✓ | ✓ | | |
| Proceedings of Interaction Design and Children, IDC 2022 | 8 | IDC | ✓ | | | |
| ICARM 2022 - 2022 7th IEEE International Conference on Advanced Robotics and Mechatronics | 8 | ICARM | ✓ | | | |
| Proceedings of the Annual Hawaii International Conference on System Sciences | 7 | HICSS | | | | |
| IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM | 7 | AIM | | | | |
@inproceedings{spitale2024hri,
title={HRI Wasn't Built In a Day: A Call To Action For Responsible HRI Research},
author={Micol Spitale and Rebecca Stower and Elmira Yadollahi and Maria Teresa Parreira and Iolanda Leite and Hatice Gunes},
booktitle={2024 33rd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)},
year={2024}
}
Research reproducibility – i.e., rerunning analyses on original data to replicate the results – is paramount for guaranteeing scientific validity. However, reproducibility is often very challenging, especially in research fields where multi-disciplinary teams are involved, such as child-robot interaction (CRI). This paper presents a systematic review of the last three years (2020-2022) of research in CRI under the lens of reproducibility, by analysing the field for transparency in reporting. Across a total of 325 studies, we found deficiencies in reporting demographics (e.g. age of participants), study design and implementation (e.g. length of interactions), and open data (e.g. maintaining an active code repository). From this analysis, we distil a set of guidelines and provide a checklist to systematically report CRI studies to help and guide research to improve reproducibility in CRI and beyond.
@misc{spitale2023systematic,
title={A Systematic Review on Reproducibility in Child-Robot Interaction},
author={Micol Spitale and Rebecca Stower and Elmira Yadollahi and Maria Teresa Parreira and Nida Itrat Abbasi and Iolanda Leite and Hatice Gunes},
year={2023},
eprint={2309.01822},
archivePrefix={arXiv},
primaryClass={cs.HC}
}