Augmenting Multi-Party Face-to-Face Interactions Amongst Strangers with User Generated Content
We present the results of an investigation into the role of curated representations of self, which we term Digital Selfs, in augmented multi-party face-to-face interactions. Advances in wearable technologies such as Head-Mounted Displays (HMDs) have renewed interest in augmenting face-to-face interaction with digital content. However, existing work focuses on algorithmically matching users by data-mining shared interests from their social media accounts, which can disclose information that is inappropriate or irrelevant to others. An alternative approach is to let users manually curate the digital augmentation they present, enabling them to foreground the aspects of self that matter most to them and to avoid undesired disclosure. Through interviews, video analysis, questionnaires and device logging of 23 participants across 6 multi-party gatherings in which individuals could mix freely, we identified how users created Digital Selfs from media largely outside their existing social media accounts, and how Digital Selfs presented through HMDs were employed in multi-party interactions, playing a key role in helping strangers interact with one another. We present guidance for the design of future multi-party digital augmentations in collaborative scenarios.