Effect Size Measure
Cramér's V Calculator
A significant p-value tells you an association exists. Cramér's V tells you how strong it is. Always report both.
Try it now — drop your data file
CSV or Excel. Your data stays in your browser.
Drop your spreadsheet here
CSV or Excel · up to 50 MB
What is Cramér's V?
Cramér's V is an effect size statistic for the chi-square test of independence. It ranges from 0 to 1: 0 means no association at all; 1 means perfect association. Unlike the chi-square statistic itself, Cramér's V is normalised for sample size, so it gives a meaningful measure of how practically important a relationship is.
Named after Harald Cramér, it is calculated by normalising the chi-square statistic by the sample size and the smaller table dimension minus one. This makes it comparable across tables of different sizes: a V of 0.3 means the same thing whether your table is 2×2 or 4×5.
Cramér's V is one of the most widely reported effect sizes for categorical associations in academic publishing, market research reports, and policy analysis. Journals, including APA publications, now require reporting effect sizes alongside p-values.
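To make the calculation concrete, here is a minimal sketch in Python, assuming NumPy and SciPy 1.7 or later (where `scipy.stats.contingency.association` is available); the example table is hypothetical:

```python
# Minimal sketch: Cramér's V from a table of raw counts with SciPy.
# Assumes scipy >= 1.7 (scipy.stats.contingency.association).
import numpy as np
from scipy.stats.contingency import association

# Hypothetical 2x3 table: rows = two groups, columns = three response categories
observed = np.array([[30, 15, 5],
                     [10, 25, 15]])

v = association(observed, method="cramer")
print(round(v, 3))  # → 0.418
```

Because the statistic is normalised, the same call works unchanged on a 2×2 or a 4×5 table.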
When to use it
- You've run a chi-square test and want to quantify the practical significance of a statistically significant result.
- You need to compare the strength of associations across multiple crosstabs.
- You are writing up results for publication and need an APA-compliant effect size.
- Your table has more than two rows or columns (the phi coefficient applies only to 2×2 tables).
Formula
Cramér's V
V = √( χ² / (n × (k − 1)) )
χ² = chi-square statistic from your contingency table
n = total number of observations
k = min(rows, columns) — the smaller of the two table dimensions
Range: 0 (no association) to 1 (perfect association)
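The formula translates directly into code. A from-scratch sketch using only NumPy (the function name `cramers_v` is ours for illustration, not a library API):

```python
import numpy as np

def cramers_v(table):
    """Cramér's V for a 2-D contingency table of observed counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()                                    # total observations
    # Expected counts under independence: outer product of the margins / n
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()  # chi-square statistic
    k = min(table.shape)                               # smaller table dimension
    return np.sqrt(chi2 / (n * (k - 1)))

print(round(cramers_v([[10, 20], [20, 10]]), 3))  # → 0.333
```

Note how each line mirrors a term of the formula: `chi2` is χ², `n` is the sample size, and `k` is min(rows, columns).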
Interpreting Cramér's V
Cohen (1988) proposed the following benchmarks for 2×2 tables. For larger tables, adjust the thresholds downward: a V of about 0.15 in a 4×4 table is already close to a medium effect.
- 0.00 – 0.10: Negligible
- 0.10 – 0.30: Small
- 0.30 – 0.50: Medium
- 0.50 – 1.00: Large
These are guidelines, not rules. Always interpret the effect size in the context of your domain. In clinical research a “small” effect can be highly consequential; in consumer surveys a “medium” effect may not be actionable.