Community Wishlist Survey/Prioritization
This article is written for volunteers, Community Wishlist Survey enthusiasts, and advanced contributors. We, the Community Tech team, want to describe how we plan our work on proposals after the voting phase ends, and to explain our software development process. We welcome feedback about the clarity of this document.
Each edition of the Community Wishlist Survey produces a new list of proposals sorted by the number of votes. Over the years, we have learned that committing to working on the top 10 is not the best idea.
Instead, we have developed a method to prioritize the proposals, assessing them systematically and transparently. The prioritization helps us decide how to work so that we can complete as many proposals as possible. It rests on a few assumptions:
- The popularity of a proposal should be a very important factor in our selection, but not the only one.
- It is best to work on proposals in a strategic order and complete as many as possible.
- Engineers and designers should be able to work in parallel without blocking one another. For example, while the designer researches a proposal and generates visual components for it, the engineers focus on proposals that are purely technical.
- It is best to communicate transparently with the communities rather than hiding the details. Visibility builds trust and dialogue.
Summary of criteria
When prioritizing, we review the 30 most popular proposals. We do not review any proposals below that cutoff, because we are not able to grant more than 30 wishes per year. We score the proposals based on popularity, technical complexity, product & design complexity, and community impact. The following sections describe the criteria in detail.
Once every proposal is scored, we rank them all and work according to this ranking. Only then can we:
- Work on the most possible number of wishes with the resources we have.
- Choose to make the biggest impact while taking maintenance and complexity into account.
We also consult with other teams at the Foundation and investigate whether they are already working on projects related to the proposals.
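Purely as an illustration, the scoring and ranking could be sketched in code as below. The weighting in `priority_score` is invented for this example; in practice we weigh the criteria by judgment rather than by a fixed formula, so treat this as a sketch of the idea, not our actual method.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    title: str
    votes: int                 # popularity from the survey
    technical_complexity: int  # 1 (lowest) to 6 (extra large)
    design_complexity: int     # 1 (lowest) to 6 (extra large)
    community_impact: int      # +1 per equity criterion met (see below)

def priority_score(p: Proposal) -> int:
    # Hypothetical weighting: popularity and impact raise the score,
    # complexity lowers it. The real trade-off is a team judgment,
    # not a published formula.
    return p.votes + 10 * p.community_impact - 5 * (
        p.technical_complexity + p.design_complexity)

def prioritize(proposals: list[Proposal]) -> list[Proposal]:
    # Only the 30 most popular proposals are reviewed at all.
    top_30 = sorted(proposals, key=lambda p: p.votes, reverse=True)[:30]
    return sorted(top_30, key=priority_score, reverse=True)
```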
Technical Complexity
Criteria
Our engineers estimate how much effort they would need to put into granting a wish. They prioritize less complex (more workable) projects. Whenever something is not clear, they try to overestimate rather than underestimate.
- Technical dependency – we check if the work requires interactions with other Wikimedia Foundation teams. It could be that part of the work needs to be on other teams' roadmap or that we need other teams' input or feedback before we can complete the wish. Examples of these are schema changes, security reviews, adding a new extension, and upgrading third-party libraries.
- Technical research – we ask ourselves if we know how to approach a particular problem. Sometimes we need to evaluate and consider our options before we can start thinking about a solution. Sometimes we need to confirm that what needs to be done can be done, or that it is within the capabilities of the platform we are working on.
- Technical effort – we ask ourselves how familiar we are with the underlying code and how big or complex the task can be. A high-effort score could also mean that the code we'll be working with is old, brittle, or has some degree of technical debt that will have to be dealt with before we can start working on our actual task.
Scale
Each of these is ranked on a 1-6 scale:
- 1 - Lowest Complexity
- 2 - Low-Medium Complexity
- 3 - Medium Complexity
- 4 - Medium-Large Complexity
- 5 - Large Complexity
- 6 - Extra-Large Complexity
Product and Design Complexity
Criteria
Similarly to the assessments above, our designer estimates the effort needed to complete a project. They prioritize less complex (more workable) projects. Whenever something is not clear, they try to overestimate rather than underestimate.
- Design research effort – we seek to understand the level of research needed for each proposal. In this case, the research involves understanding the problem, either at the very beginning through initial discovery work (the scope and details of the project, surveys or interviews with community members), or later in the process through community discussions and usability testing (e.g. how users contribute with and without the new feature).
- Visual design effort – a significant number of proposals require changes to the user interface of Wikimedia projects. Therefore, we estimate the extent of the user-interface change: how many elements need to be designed and how complex they are. For instance, whether we can use existing components from our design system or need to create new ones, keeping in mind how many states or warnings need to be conceived to help guide users, including newcomers.
- Workflow complexity – we ask ourselves how this particular problem interferes with the current workflows or steps in the user experience of editors. For example, a high score here would mean that there are a lot of different scenarios or places in the user interface where contributors might interact with a new feature. It can also mean that we might have to design for different user groups, advanced users and newcomers alike.
Scale
Each of these is ranked on the same 1-6 scale (a worked example follows below):
- 1 - Lowest Complexity
- 2 - Low-Medium Complexity
- 3 - Medium Complexity
- 4 - Medium-Large Complexity
- 5 - Large Complexity
- 6 - Extra-Large Complexity
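As a worked example of the two rubrics, the sketch below rolls up a hypothetical proposal's six sub-scores into one technical and one product & design figure. Taking the maximum sub-score is an assumption made for the example, chosen because it mirrors our preference for overestimating when unsure; we have not published a fixed roll-up rule.

```python
# Hypothetical sub-scores for an imaginary proposal, each on the 1-6 scale.
technical = {"dependency": 2, "research": 4, "effort": 3}
design = {"research": 3, "visual": 2, "workflow": 5}

# Assumption: roll each rubric up to its worst (highest) sub-score,
# mirroring the preference for overestimating when in doubt.
technical_complexity = max(technical.values())  # 4 - Medium-Large Complexity
design_complexity = max(design.values())        # 5 - Large Complexity
```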
Community Impact
In contrast to the two perspectives described above, this part is about equity. Practically, it is about ensuring that majorities aren't the only ones whose needs we work on.
Depending on this score, proposals with similar numbers of votes and similar degrees of complexity become more or less likely to be prioritized. If a given criterion is met, the proposal gets +1; the more intersections, the higher the score (see the sketch after this list). This assessment was added by our Community Relations Specialist.
- Not only for Wikipedia – proposals related to various projects, as well as project-neutral proposals, are ranked higher than proposals dedicated only to Wikipedia. Autosave edited or new unpublished article is an example of a prioritized proposal.
- Sister projects and smaller wikis – we additionally prioritize proposals about undersupported projects (like Wikisource or Wiktionary). We count Wikimedia Commons among these. Tool that reviews new uploads for potential copyright violations is an example of a prioritized proposal.
- Critical supporting groups – we prioritize proposals dedicated to stewards, CheckUsers, admins, and similar groups serving and technically supporting the broader community. Show recent block history for IPs and ranges is an example of a prioritized proposal.
- Reading experience – we prioritize proposals improving the experience of the largest group of users – the readers. Select preview image is an example of a prioritized proposal.
- Non-textual content and structured data – we prioritize proposals related to multimedia, graphs, etc. Mass uploader is an example of a prioritized proposal.
- Urgency – we prioritize perennial bugs, recurring proposals, and changes which would make contributing significantly smoother. Fix search and replace in the Page namespace editor is an example of a prioritized proposal.
- Barrier for entry – we prioritize proposals about communication and those that would help users make their first contributions. Show editnotices on mobile is an example of a prioritized proposal.
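The impact score itself is easy to express in code: each criterion met adds +1. The sketch below uses paraphrased criterion names of our own invention; deciding which criteria a proposal actually meets is, of course, a human judgment.

```python
# Paraphrased names for the seven equity criteria listed above.
IMPACT_CRITERIA = frozenset({
    "not_only_for_wikipedia",
    "sister_projects_and_smaller_wikis",
    "critical_supporting_groups",
    "reading_experience",
    "non_textual_and_structured_content",
    "urgency",
    "barrier_for_entry",
})

def community_impact(criteria_met: set[str]) -> int:
    # +1 for every equity criterion the proposal satisfies;
    # the more intersections, the higher the score.
    return len(IMPACT_CRITERIA & criteria_met)

# A hypothetical proposal helping smaller sister projects
# with multimedia content would score 2.
assert community_impact({"sister_projects_and_smaller_wikis",
                         "non_textual_and_structured_content"}) == 2
```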
2022 Results ranked by Prioritization Score
These scores may change when we start working on the proposals. As we explained above, we have tried to overestimate rather than underestimate. Check out the proposals in order of prioritization:
In addition, if you are interested in a more granular view, we have made the individual sub-components of the prioritization scores public:
While estimating complexity, we found that the following proposals will be worked on by other teams at the WMF or by third-party open-source developers:
Wish | Popularity Rank
---|---
Deal with Google Chrome User-Agent deprecation | 15 |
Show editnotices on mobile | 15 |
Categories in mobile app | 18 |
Mass uploader | 28 |
Helpful Terminology
Unmoderated user research
Using a tool like UserTesting.com to run “mocks” of our proposed design changes and see if we are designing the right solution for the wish. It is called “unmoderated” because we let users click around and check whether our designs make sense without having to explain them.
Quantitative data collection
The process of collecting data to understand how users interact with the current UI and where the wish's pain points lie, be it data on clicks, visits, downloads, sessions, etc. Data is often limited when we first tackle a wish, either because it was not tracked before the wish or because it does not exist for privacy reasons.
Qualitative data collection
Understanding the wish's problem space by talking directly to users, whether through interviews or a survey at the beginning of the wish, to understand the pain points and clarify how to approach a solution.
“Sourcing” users
The process of finding users who have the knowledge required to participate in our user tests and give us the information we need to judge whether our design and product decisions are headed in the right direction. Some wishes are aimed at advanced users, who are hard to source and not available in tools like UserTesting.com.
Code refactoring
The process of making existing code more maintainable so that other people may contribute to it, as well as removing technical debt and bugs.
Database schema changes
An alteration to the logical structures of all or part of a relational database. When a change to an existing database is needed, it must be designed and then approved by a team external to CommTech. This usually takes more time and adds structural complexity to the project.
Third party code
Code written outside of the Community Tech team; examples include external APIs and libraries.