Information communities like Slashdot or Wikipedia have become very successful at using peer production to collect voluntary contributions from their members. With the knowledge being managed by the community itself, as opposed to a central authority, two questions arise: How does the community decide, with a minimum of effort, which data items are correct and important? And which incentives encourage users to contribute without compromising the quality of the data? A promising solution is peer rating: users review data objects created by others and rate them according to their perceived quality. To motivate users to contribute, remuneration is made contingent on the extent and quality of their contributions. Community platforms like eBay or Slashdot have demonstrated that reputation or privileges within the system can be effective incentives. Which incentives are most appropriate for a virtual community remains our open research question.
We study the appropriateness of incentive mechanisms and rating protocols for virtual communities by means of simulation and user experiments. To this end, we have developed a dedicated simulation framework for the behavior of peer-based online communities. The results of the simulations will be validated with user experiments in real-world online communities. We will use the insights gained to develop a generic platform for the collaborative, incentive-based creation of knowledge structures. This platform will be particularly suited to the creation of structured knowledge such as ontologies.
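To make the peer-rating and reputation mechanism concrete, the following is a minimal, hypothetical simulation sketch, not the actual framework described above. It assumes a simple model in which each user has a latent skill, contributed items inherit quality from their author with noise, peers rate items with noisy perception, and a user's reputation rewards both the quality (mean rating) and the extent (count) of contributions; all names and parameters are illustrative.

```python
import random

def simulate(num_users=10, num_items=50, seed=42):
    """Toy peer-rating simulation: items inherit quality from their
    author's latent skill; random peers rate each item with noise;
    reputation combines mean rating with contribution volume."""
    rng = random.Random(seed)
    skill = {u: rng.random() for u in range(num_users)}
    received = {u: [] for u in range(num_users)}  # mean rating per item, by author

    def clamp(x):
        return min(1.0, max(0.0, x))

    for _ in range(num_items):
        author = rng.randrange(num_users)
        quality = clamp(skill[author] + rng.gauss(0, 0.1))
        # each item is reviewed by three random peers with noisy perception
        ratings = [clamp(quality + rng.gauss(0, 0.1)) for _ in range(3)]
        received[author].append(sum(ratings) / len(ratings))

    # reputation: mean perceived quality, scaled sublinearly by volume so that
    # many low-quality contributions do not outrank a few excellent ones
    reputation = {}
    for u, scores in received.items():
        if scores:
            mean_q = sum(scores) / len(scores)
            reputation[u] = mean_q * len(scores) ** 0.5
        else:
            reputation[u] = 0.0
    return skill, reputation

if __name__ == "__main__":
    skill, reputation = simulate()
    for u in sorted(reputation, key=reputation.get, reverse=True):
        print(f"user {u}: skill={skill[u]:.2f} reputation={reputation[u]:.2f}")
```

In such a model, the design question the text raises becomes tunable: the exponent on the contribution count trades off rewarding extent against rewarding quality, which is exactly the kind of parameter a simulation framework can vary.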