The charts below plot the estimated average treatment effect of BWCs (as a yearly rate, per 1,000 officers) on the following outcomes: documented use of force, civilian complaints, officer discretion (as measured by arrests for disorderly conduct), whether a case was prosecuted, and case disposition. We calculate each estimate by comparing the average rate of the outcome (e.g., use of force) in the treatment group with the average rate in the control group. (All estimates are reported in the Supplementary Materials.)

If BWCs have no impact, then the outcomes for the two groups of officers will be about the same. If, on the other hand, we see statistically significant differences between the two groups, then we can infer that BWCs caused that difference.
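The comparison itself is simple: subtract the control group's average rate from the treatment group's average rate. A minimal sketch in Python, using made-up rates (not the study's data):

```python
# Hypothetical illustration only: these rates are invented, not the
# study's actual numbers. Each value is an outcome rate for one group
# of officers, expressed per 1,000 officers per year.
treatment_rates = [310.0, 295.5, 320.2, 305.8]  # officers with BWCs
control_rates = [315.4, 300.1, 318.9, 311.0]    # officers without BWCs

def mean(xs):
    """Average of a list of rates."""
    return sum(xs) / len(xs)

# The estimated average treatment effect is the difference in means:
# treatment average minus control average.
ate = mean(treatment_rates) - mean(control_rates)
```

With these invented numbers, `ate` comes out slightly negative, which on its own says nothing about significance; that is what the confidence interval, described below, is for.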

What makes a difference “significant” is a statistical question. Some differences are due to chance, just noise caused by something other than the program we implemented—in this case, BWCs. The randomized design and associated statistics give us a rigorous way to estimate the average effect and to describe how *uncertain* we are about it.

For each outcome, we report our best estimate of the effects of BWCs as well as the margin of error (or 95% confidence interval), shown by the bars extending away from the estimate. This interval expresses our uncertainty (how much noise is there?). Roughly speaking, when a confidence interval spans negative, zero, and positive values, scientists sometimes speak of the result being “statistically insignificant,” or “null.” **More plainly, we interpret this to mean that BWCs had no detectable effect on the outcome in question.**
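The "spans zero" check described above can be sketched in a few lines of Python. The estimate and standard error here are assumed, illustrative values, and the interval uses the common normal-approximation formula (estimate ± 1.96 × standard error); the study's own estimates are in the Supplementary Materials.

```python
# Hypothetical sketch: a 95% confidence interval for an estimated
# effect, and the "does the interval span zero?" check. The numbers
# are illustrative assumptions, not the study's estimates.
estimate = -3.5        # estimated effect (per 1,000 officers, yearly)
standard_error = 5.0   # how noisy the estimate is (assumed value)

# 95% confidence interval under a normal approximation:
# estimate plus or minus 1.96 standard errors.
lower = estimate - 1.96 * standard_error
upper = estimate + 1.96 * standard_error

# If the interval contains zero, the effect is statistically
# indistinguishable from zero: a "null" result, which we interpret
# as BWCs having no detectable effect on that outcome.
detectable = not (lower < 0.0 < upper)
```

Here the interval runs from about -13.3 to 6.3, so it spans negative, zero, and positive values, and `detectable` is `False`: exactly the "no detectable effect" reading described above.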

Select a chart below to explore the estimated average effects of BWCs on the measured outcomes, as well as the uncertainty of those estimates.