Title: Soft peer review. Social software and distributed scientific evaluation
Keywords: peer review; rating; impact factor; citation analysis; usage factors; scholarly publishing; social bookmarking; collaborative annotation; online reference managers; social software; web 2.0; tagging; folksonomy
Part of: From CSCW to Web 2.0: European Developments in Collaborative Design. Selected Papers from COOP08
Abstract: The debate on the prospects of peer review in the Internet age and the growing criticism of the dominant role of impact factor indicators are calling for new measurable criteria for assessing scientific quality. Usage-based metrics offer a new avenue for scientific quality assessment, but they face the same risks as first-generation search engines, which used unreliable metrics (such as raw traffic data) to estimate content quality. In this article I analyze the contribution that social bookmarking systems can make to the problem of usage-based metrics for scientific evaluation. I suggest that collaboratively aggregated metadata may help fill the gap between traditional citation-based criteria and raw usage factors. I submit that bottom-up, distributed evaluation models such as those afforded by social bookmarking will challenge more traditional quality assessment models in terms of coverage, efficiency, and scalability. Services aggregating user-generated quality indicators for online scientific content will come to occupy a key function in the scholarly communication system.
Appears in Collections: COOP 2008: Proceedings of the 8th International Conference on Designing Cooperative Systems