Constructing Awareness Through Speech, Gesture, Gaze and Movement During a Time-Critical Medical Task

dc.contributor.author: Zhang, Zhan
dc.contributor.author: Sarcevic, Aleksandra
dc.date.accessioned: 2017-10-23T11:55:30Z
dc.date.available: 2017-10-23T11:55:30Z
dc.date.issued: 2015
dc.description.abstract: We conducted a video-based study to examine how medical teams construct and maintain awareness of what is going on in the environment during a time-critical, collaborative task—endotracheal intubation. Drawing on a theme that characterizes work practices in collaborative work settings—reading a scene—we examine both vocal and non-vocal actions (e.g., speech, body movement, gesture, gaze) of team members participating in this task to understand how these actions are used to display the status of one’s work or to acquire information about the work status of others. While each action modality was helpful in constructing awareness to some extent, it posed different challenges, requiring team members to combine both vocal and non-vocal actions to achieve awareness about each other’s activities and their temporal order. We conclude by discussing different types of non-vocal actions, their purpose, and the need for computational support in this dynamic work setting.
dc.identifier.doi: 10.1007/978-3-319-20499-4_9
dc.identifier.isbn: 978-3-319-20498-7
dc.language.iso: en
dc.publisher: Springer, Cham
dc.relation.ispartof: ECSCW 2015: Proceedings of the 14th European Conference on Computer Supported Cooperative Work
dc.relation.ispartofseries: ECSCW
dc.title: Constructing Awareness Through Speech, Gesture, Gaze and Movement During a Time-Critical Medical Task
dc.type: Text/Conference Paper
gi.citation.endPage: 182
gi.citation.startPage: 163
gi.citations.count: 9
gi.citations.element: Zhan Zhang, Aleksandra Sarcevic, Claus Bossen (2017): Constructing Common Information Spaces across Distributed Emergency Medical Teams, In: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, doi:10.1145/2998181.2998328
gi.citations.element: Swathi Jagannath, Aleksandra Sarcevic, Ivan Marsic (2018): An Analysis of Speech as a Modality for Activity Recognition during Complex Medical Teamwork, In: Proceedings of the 12th EAI International Conference on Pervasive Computing Technologies for Healthcare, doi:10.1145/3240925.3240941
gi.citations.element: Swathi Jagannath, Aleksandra Sarcevic, Neha Kamireddi, Ivan Marsic (2019): Assessing the Feasibility of Speech-Based Activity Recognition in Dynamic Medical Settings, In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, doi:10.1145/3290607.3312983
gi.citations.element: Gloria Milena Fernandez-Nieto, Roberto Martinez-Maldonado, Kirsty Kitto, Simon Buckingham Shum (2021): Modelling Spatial Behaviours in Clinical Team Simulations using Epistemic Network Analysis: Methodology and Teacher Evaluation, In: LAK21: 11th International Learning Analytics and Knowledge Conference, doi:10.1145/3448139.3448176
gi.citations.element: Gloria Fernandez-Nieto, Roberto Martinez-Maldonado, Vanessa Echeverria, Kirsty Kitto, Pengcheng An, Simon Buckingham Shum (2021): What Can Analytics for Teamwork Proxemics Reveal About Positioning Dynamics In Clinical Simulations?, In: Proceedings of the ACM on Human-Computer Interaction CSCW1(5), doi:10.1145/3449284
gi.citations.element: Gloria Fernandez-Nieto, Pengcheng An, Jian Zhao, Simon Buckingham Shum, Roberto Martinez-Maldonado (2022): Classroom Dandelions: Visualising Participant Position, Trajectory and Body Orientation Augments Teachers’ Sensemaking, In: CHI Conference on Human Factors in Computing Systems, doi:10.1145/3491102.3517736
gi.citations.element: Simon Buckingham Shum, Vanessa Echeverria, Roberto Martinez-Maldonado (2019): The Multimodal Matrix as a Quantitative Ethnography Methodology, In: Communications in Computer and Information Science, doi:10.1007/978-3-030-33232-7_3
gi.citations.element: Sapir Gershov, Daniel Braunold, Robert Spektor, Alexander Ioscovich, Aeyal Raz, Shlomi Laufer (2023): Automating medical simulations, In: Journal of Biomedical Informatics, doi:10.1016/j.jbi.2023.104446
gi.citations.element: Swathi Jagannath, Neha Kamireddi, Katherine Ann Zellner, Randall S. Burd, Ivan Marsic, Aleksandra Sarcevic (2022): A Speech-Based Model for Tracking the Progression of Activities in Extreme Action Teamwork, In: Proceedings of the ACM on Human-Computer Interaction CSCW1(6), doi:10.1145/3512920
gi.conference.date: 19-23 September 2015
gi.conference.location: Oslo, Norway
gi.conference.sessiontitle: Full Papers

Files

Original bundle

Name: 12 ZhangSarcevic2015.pdf
Size: 784.23 KB
Format: Adobe Portable Document Format