I am interested in comparing the performance of two techniques (Method A vs. Method B) on a continuous outcome of interest. Performance is measured through participants' test scores (e.g., Participant 1 passed 30 questions out of 50 with Method A and 32 with Method B; Participant 2 passed 22 out of 50 with Method A and 20 with Method B; etc.). Most studies report the mean number of questions passed with each technique, but rarely the standard deviation. Different studies use different numbers of questions or different methods for measuring the outcome.
I managed to back-compute the standard deviations from the studies' box plots, so I can run a meta-analysis with Cohen's d.
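For reference, here is a minimal sketch of the standardized mean difference I am computing per study (assuming each study gives means, the back-computed SDs, and sample sizes; the function and variable names, and the numbers in the example, are my own):

```python
import numpy as np

def cohens_d(mean_a, sd_a, n_a, mean_b, sd_b, n_b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical study: scores out of 50 questions
print(cohens_d(mean_a=30.0, sd_a=6.0, n_a=25, mean_b=27.0, sd_b=5.5, n_b=25))
```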
I also thought about running a meta-analysis with response ratios (i.e., mean(A)/mean(B)) to check the robustness of the results to the choice of effect size. However, Borenstein et al. say: "The response ratio is not meaningful for studies (such as most social science studies) that measure outcomes such as test scores, attitude measures, or judgments, since these have no natural scale units and no natural zero points".
I do not understand why. To me, the ratio between the means makes complete sense (e.g., a ratio of 200% means that Method A's mean is twice that of Method B, and vice versa), regardless of the scales or number of questions used in the studies.
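To make my reasoning concrete, this is the (log) response ratio I had in mind, with made-up numbers (the analysis would be run on the log scale and back-transformed at the end):

```python
import numpy as np

def log_response_ratio(mean_a, mean_b):
    """Log of the ratio of means; exponentiating recovers the ratio."""
    return np.log(mean_a / mean_b)

# Hypothetical study: Method A mean 30/50, Method B mean 20/50 -> ratio 1.5
lrr = log_response_ratio(30.0, 20.0)
print(np.exp(lrr))  # 1.5, i.e., Method A's mean is 150% of Method B's
```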
What is wrong with response ratios in this case?
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2011). Introduction to meta-analysis. John Wiley & Sons.