Wikidata holds a large amount of data, and its quality is important. To keep it that way, we need to build two feedback loops:
- Data from Wikidata is used elsewhere, and users there find issues. They should be able to report these errors back to us easily.
- Data from another source is imported into Wikidata or compared against data in Wikidata, and we find issues in the source data. We need ways to feed these errors back to their source.
Research questions:
- What does a good workflow look like?
- What do Wikidata editors and data consumers want?
- How does this all fit together with existing tools, such as the Wikidata Quality extension's checks against third-party databases and the Primary Sources Tool?