A Comparison of Usage Evaluation and Inspection Methods for Assessing Groupware Usability

dc.contributor.author: Steves, Michelle Potts
dc.contributor.author: Morse, Emile
dc.contributor.author: Gutwin, Carl
dc.contributor.author: Greenberg, Saul
dc.date.accessioned: 2023-06-08T11:43:19Z
dc.date.available: 2023-06-08T11:43:19Z
dc.date.issued: 2001
dc.description.abstract: Many researchers believe that groupware can only be evaluated by studying real collaborators in their real contexts, a process that tends to be expensive and time-consuming. Others believe that it is more practical to evaluate groupware through usability inspection methods. Deciding between these two approaches is difficult, because it is unclear how they compare in a real evaluation situation. To address this problem, we carried out a dual evaluation of a groupware system, with one evaluation applying user-based techniques, and the other using inspection methods. We compared the results from the two evaluations and concluded that, while the two methods have their own strengths, weaknesses, and trade-offs, they are complementary. Because the two methods found overlapping problems, we expect that they can be used in tandem to good effect, e.g., applying the discount method prior to a field study, with the expectation that the system deployed in the more expensive field study has a better chance of doing well because some pertinent usability problems will have already been addressed.
dc.identifier.doi: 10.1145/500286.500306
dc.identifier.uri: https://dl.eusset.eu/handle/20.500.12015/4774
dc.language.iso: en
dc.publisher: Association for Computing Machinery
dc.relation.ispartof: Proceedings of the 2001 ACM International Conference on Supporting Group Work
dc.subject: usage evaluation techniques
dc.subject: groupware usability
dc.subject: inspection evaluation techniques
dc.subject: evaluation
dc.title: A Comparison of Usage Evaluation and Inspection Methods for Assessing Groupware Usability
gi.citation.publisherPlace: New York, NY, USA
gi.citation.startPage: 125–134
gi.citations.count: 33
gi.citations.element: Carmelo Ardito, Rosa Lanzilotti, Marcin Sikorski, Igor Garnik (2014): Can Evaluation Patterns Enable End Users to Evaluate the Quality of an e-learning System? An Exploratory Study, In: Lecture Notes in Computer Science, doi:10.1007/978-3-319-07440-5_18
gi.citations.element: C.T. Blake, L. Rapanotti (2004): Usability evaluation of distributed groupware in distance learning, In: Proceedings of the Fifth International Conference on Information Technology Based Higher Education and Training (ITHET 2004), doi:10.1109/ithet.2004.1358224
gi.citations.element: Steven R. Haynes, Sandeep Purao, Amie L. Skattebo (2009): Scenario-Based Methods for Evaluating Collaborative Systems, In: Computer Supported Cooperative Work (CSCW) 4(18), doi:10.1007/s10606-009-9095-x
gi.citations.element: (2010): Bibliography, In: Interacting with Geospatial Technologies, doi:10.1002/9780470689813.biblio
gi.citations.element: Paolo Davoli, Matteo Monari, Kerstin Severinson Eklundh (2008): Peer activities on Web-learning platforms—Impact on collaborative writing and usability issues, In: Education and Information Technologies 3(14), doi:10.1007/s10639-008-9080-x
gi.citations.element: Gregorio Convertino, Dennis C. Neale, Laurian Hobby, John M. Carroll, Mary Beth Rosson (2004): A laboratory method for studying activity awareness, In: Proceedings of the third Nordic conference on Human-computer interaction, doi:10.1145/1028014.1028063
gi.citations.element: Jacques Wainer, Claudia Barsottini (2007): Empirical research in CSCW — a review of the ACM/CSCW conferences from 1998 to 2004, In: Journal of the Brazilian Computer Society 3(13), doi:10.1007/bf03192543
gi.citations.element: Pedro Antunes, Marcos R. S. Borges, Jose A. Pino, Luis Carriço (2006): Analytic Evaluation of Groupware Design, In: Lecture Notes in Computer Science, doi:10.1007/11686699_4
gi.citations.element: Kevin Baker, Saul Greenberg, Carl Gutwin (2002): Empirical development of a heuristic evaluation methodology for shared workspace groupware, In: Proceedings of the 2002 ACM conference on Computer supported cooperative work, doi:10.1145/587078.587093
gi.citations.element: Matthias Krauß, Kai Riege, Marcus Winter, Lyn Pemberton (2009): Remote Hands-On Experience: Distributed Collaboration with Augmented Reality, In: Lecture Notes in Computer Science, doi:10.1007/978-3-642-04636-0_22
gi.citations.element: Dalma Geszten, Balázs Péter Hámornik, Károly Hercegfi (2024): Team usability testing: development and validation of a groupware usability evaluation method, In: Cognition, Technology & Work 3(26), doi:10.1007/s10111-024-00759-5
gi.citations.element: Lu Liang, Yong Tang, Na Tang (2006): Determinants of Groupware Usability for Community Care Collaboration, In: Lecture Notes in Computer Science, doi:10.1007/11610113_45
gi.citations.element: Anders Bruun, Peter Gull, Lene Hofmeister, Jan Stage (2009): Let your users do the testing, In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, doi:10.1145/1518701.1518948
gi.citations.element: Rahat Iqbal, Janienke Sturm, Olga Kulyk, Jimmy Wang, Jacques Terken (2005): User-centred design and evaluation of ubiquitous services, In: Proceedings of the 23rd annual international conference on Design of communication: documenting & designing for pervasive information, doi:10.1145/1085313.1085346
gi.citations.element: David Pinelle, Carl Gutwin (2007): Evaluating teamwork support in tabletop groupware applications using collaboration usability analysis, In: Personal and Ubiquitous Computing 3(12), doi:10.1007/s00779-007-0145-4
gi.citations.element: Kristin Dew, Anne M. Turner, Loma Desai, Nathalie Martin, Katrin Kirchhoff (2015): Evaluating Groupware Prototypes with Discount Methods, In: Proceedings of the 18th ACM Conference Companion on Computer Supported Cooperative Work & Social Computing, doi:10.1145/2685553.2699002
gi.citations.element: Minseok Lee, Jungpil Hahn, Gavriel Salvendy (2010): Developing and validating a methodology for discount usability evaluation of collaboration technologies, In: Theoretical Issues in Ergonomics Science 3(11), doi:10.1080/14639220802613434
gi.citations.element: James Tam, Saul Greenberg (2004): A Framework for Asynchronous Change Awareness in Collaboratively-Constructed Documents, In: Lecture Notes in Computer Science, doi:10.1007/978-3-540-30112-7_7
gi.citations.element: Natalia Sales Santos, Lidia Silva Ferreira, Raquel Oliveira Prates (2012): An Overview of Evaluation Methods for Collaborative Systems, In: 2012 Brazilian Symposium on Collaborative Systems, doi:10.1109/sbsc.2012.29
gi.citations.element: Claudio Sapateiro, Nelson Baloian, Pedro Antunes, Gustavo Zurita (2009): Developing collaborative peer-to-peer applications on mobile devices, In: 2009 13th International Conference on Computer Supported Cooperative Work in Design, doi:10.1109/cscwd.2009.4968091
gi.citations.element: Pedro Antunes, Valeria Herskovic, Sergio F. Ochoa, Jose A. Pino (2008): Structuring dimensions for collaborative systems evaluation, In: ACM Computing Surveys 2(44), doi:10.1145/2089125.2089128
gi.citations.element: Pedro Antunes, Claudio Sapateiro, Gustavo Zurita, Nelson Baloian (2010): Development of a Mobile Situation Awareness Tool Supporting Disaster Recovery of Business Operations, In: Annals of Information Systems, doi:10.1007/978-1-4419-7406-8_17
gi.citations.element: David Pinelle, Carl Gutwin (2002): Groupware walkthrough, In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, doi:10.1145/503376.503458
gi.citations.element: Purvi Saraiya, Chris North, Karen Duca (2010): Comparing benchmark task and insight evaluation methods on timeseries graph visualizations, In: Proceedings of the 3rd BELIV'10 Workshop: BEyond time and errors: novel evaLuation methods for Information Visualization, doi:10.1145/2110192.2110201
gi.citations.element: M.P. Steves, E. Morse (2001): Looking at the whole picture: a case study of analyzing a virtual workplace, In: Proceedings of the Tenth IEEE International Workshop on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2001), doi:10.1109/enabl.2001.953396
gi.citations.element: Soraia Reis, Raquel Prates (2012): An initial analysis of communicability evaluation methods through a case study, In: CHI '12 Extended Abstracts on Human Factors in Computing Systems, doi:10.1145/2212776.2223845
gi.citations.element: Steven R. Haynes, Sandeep Purao, Amie L. Skattebo (2004): Situating evaluation in scenarios of use, In: Proceedings of the 2004 ACM conference on Computer supported cooperative work, doi:10.1145/1031607.1031624
gi.citations.element: Zheng (Eric) Chang, Songnian Li (2012): Geo‐Social Model: A Conceptual Framework for Real‐time Geocollaboration, In: Transactions in GIS 2(17), doi:10.1111/j.1467-9671.2012.01352.x
gi.citations.element: Raquel Oliveira Prates, Alberto Barbosa Raposo (2006): Desafios para testes de usuários em sistemas colaborativos - lições de um estudo de caso [Challenges for user testing in collaborative systems: lessons from a case study], In: Proceedings of VII Brazilian symposium on Human factors in computing systems, doi:10.1145/1298023.1298051
gi.citations.element: Cláudio Miguel Sapateiro, Sérgio Grosso (2000): Capturing Distributed Contributions to an Informal Work Process, In: Handbook of Research on Developments in E-Health and Telemedicine, doi:10.4018/978-1-61520-670-4.ch054
gi.citations.element: Chris North, Purvi Saraiya, Karen Duca (2011): A comparison of benchmark task and insight evaluation methods for information visualization, In: Information Visualization 3(10), doi:10.1177/1473871611415989
gi.citations.element: R. Lanzilotti, C. Ardito, M.F. Costabile, A. De Angeli (2011): Do patterns help novice evaluators? A comparative study, In: International Journal of Human-Computer Studies 1-2(69), doi:10.1016/j.ijhcs.2010.07.005
gi.citations.element: Dalma Geszten, Balázs Péter Hámornik, Károly Hercegfi (2020): Empirical study of Team Usability Testing: a laboratory experiment, In: Cognition, Technology & Work 4(23), doi:10.1007/s10111-020-00647-8
gi.conference.location: Boulder, Colorado, USA
