About The Approach

Chief Newsham and MPD staff worked with The Lab @ DC to design and implement a randomized controlled trial (RCT). With this design, individual officers were randomly assigned—imagine flipping a coin—to either wear a body camera (treatment) or not (control). We then compared these two groups. Over 2,200 officers participated, making this one of the largest, most rigorous studies to date.

What is an RCT?

A Randomized Controlled Trial (RCT) is a type of scientific experiment designed to measure how much a policy or program causes a change, on average. Participants are randomly assigned to a treatment group, in which they receive the program (e.g., officers assigned to wear BWCs), or to a control group, in which they do not receive the program (e.g., officers assigned to not wear BWCs). This random assignment process involves a computer program but occurs in a manner similar to flipping a coin: heads, you get assigned to treatment; tails, you get assigned to control. This process leaves us with two groups—treatment and control—that look the same on average. The groups would be expected to have the same proportions of males and females, the same distribution of ages or years of service, and so forth.
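The coin-flip intuition can be sketched in a few lines of code. This is a minimal illustration, not the study's actual assignment procedure: the roster, seed, and "years of service" values below are all invented for demonstration. The point is that after a random split, a pre-treatment characteristic averages out to roughly the same value in both groups.

```python
import random

# Hypothetical roster of eligible officers; ids and years of service are
# invented for illustration, not drawn from the study.
officers = [
    {"id": i, "years_of_service": random.Random(i).randint(1, 30)}
    for i in range(2200)
]

rng = random.Random(42)  # fixed seed so the example is reproducible
shuffled = officers[:]
rng.shuffle(shuffled)

# Split the shuffled roster in half: treatment (BWC) and control (no BWC).
half = len(shuffled) // 2
treatment = shuffled[:half]
control = shuffled[half:]

def avg_years(group):
    """Average years of service, a stand-in for any pre-treatment trait."""
    return sum(o["years_of_service"] for o in group) / len(group)

# With random assignment, the two averages should be close to each other.
print(round(avg_years(treatment), 1), round(avg_years(control), 1))
```

The same logic applies to any characteristic measured before assignment: randomization balances it across groups on average, without anyone having to match officers by hand.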

Because the two groups are the same on average, except for the one thing we control to be different—namely, whether or not an officer wears a camera—we can infer, as rigorously as a scientist can infer, that any observed differences between the two groups are caused by the BWCs.

Contrast this approach with other designs you might imagine. If we simply measured outcomes over time, we would be unable to know whether any observed differences were caused by the BWCs or instead caused by other factors changing over time—such as changing demographics in the District, new laws, personnel changes, etc. Or imagine if officers volunteered to wear BWCs. In this case any observed differences may simply reflect differences between the type of officer who volunteers and the type who does not. With an RCT, in contrast, such factors affect both the treatment and control groups, canceling each other out on average.

How did we implement the RCT?

The study encompassed the entire department and included geographic coverage of the entire city. MPD is one of the largest police departments in the country, with over 3,800 sworn members serving a resident population of over 680,000. The department is organized into seven police districts covering 68 square miles, and is unique in its role as the local, state, and federal law enforcement authority in Washington, DC. We identified eligible participants within each of the seven police districts (as well as several specialized units) based on the following criteria: the member was on active, full-duty administrative status and expected to remain so during the study period; held a rank of sergeant or below; and was assigned to patrol duties in a patrol district or to a non-administrative role at a police station. Eligible officers within each district or special unit were then randomly assigned to one of two groups: (1) no BWC (“control”) or (2) with BWC (“treatment”).
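Because assignment happened within each district or special unit, the design is a stratified randomization rather than a single department-wide coin flip. The sketch below illustrates that idea with invented district rosters and sizes; it is not the study's code, only a demonstration that stratifying guarantees each district contributes equally to both groups.

```python
import random

# Illustrative only: seven hypothetical district rosters of equal size.
districts = {
    f"District {d}": [f"officer_{d}_{i}" for i in range(300)]
    for d in range(1, 8)
}

rng = random.Random(7)  # fixed seed for reproducibility
assignment = {}
for district, roster in districts.items():
    shuffled = roster[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    for officer in shuffled[:half]:
        assignment[officer] = "treatment"  # assigned a BWC
    for officer in shuffled[half:]:
        assignment[officer] = "control"    # no BWC

# Stratifying within districts balances the groups geographically as well.
n_treat = sum(1 for g in assignment.values() if g == "treatment")
print(n_treat, len(assignment) - n_treat)
```

Stratification ensures that district-level differences (call volume, neighborhood characteristics, staffing) cannot end up concentrated in one group by chance.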

The study began June 28, 2015, when cameras were randomly assigned to officers in two of MPD's seven patrol districts, and concluded on December 15, 2016, at which point BWCs were distributed to all control group officers (in accordance with a legislative mandate to equip all MPD officers with BWCs by the end of 2016). With a phased deployment process, the last patrol district received cameras in May 2016, giving it a treatment period of seven months. To accommodate this staggered process, we measure outcomes over the first seven months of BWC deployment in each district. We tracked a wide range of outcomes associated with police activity during the treatment period, following those outcomes through March 31, 2017.

We used administrative data to measure effects. The primary outcomes of interest were documented uses of force and civilian complaints, although we also measured a variety of additional policing activities and judicial outcomes.

Our examination of judicial outcomes was constrained by limitations in the available data. Namely, we did not have access to the full datasets managed by the United States Attorney’s Office (USAO), the Office of the Attorney General (OAG), and the courts. We instead had access to a subset of these data available to MPD, which captures only the initial charges on which an individual was arrested. A consequence is that we cannot track court outcomes once those initial charges change. For example, if MPD makes an arrest for a felony, and USAO changes those charges to a misdemeanor, then this event is reflected in our data only as a felony not prosecuted. The misdemeanor charge is not captured in our data. As this limitation applies to both control and treatment groups, we can still conduct a preliminary analysis on the evidentiary value of BWCs.

To analyze the data, we compare the average rate of the outcome in the treatment group to the average rate of the outcome in the control group, a comparison known as a difference-in-means. Results were obtained at the officer level and translated into yearly rates per 1,000 officers.
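The arithmetic behind a difference-in-means, including the translation to yearly rates per 1,000 officers, is straightforward. The counts below are invented for illustration (they are not the study's data), and the seven-month window matches the treatment period described above.

```python
# Hypothetical counts of documented uses of force per officer over the
# seven-month treatment window. Invented numbers, for illustration only.
treatment_counts = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0]
control_counts = [1, 0, 0, 2, 0, 1, 0, 1, 0, 0]

def rate_per_1000_officer_years(counts, months=7):
    """Average events per officer, annualized and scaled to 1,000 officers."""
    per_officer = sum(counts) / len(counts)
    return per_officer * (12 / months) * 1000

# The difference-in-means estimate: treatment rate minus control rate.
diff = (rate_per_1000_officer_years(treatment_counts)
        - rate_per_1000_officer_years(control_counts))
print(round(diff, 1))
```

Scaling both groups to the same units (events per 1,000 officers per year) makes the estimate easy to interpret regardless of how long each district's treatment window was.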

We also use another approach: a linear regression model that increases precision by controlling for pre-treatment characteristics (e.g., officer age, sex, race, district assignment, use of force prior to the deployment of BWCs). We find the same results using both methods. To simplify discussion, we present only the difference-in-means estimates in the Results section. However, all estimates are reported in the Supplementary Materials, available for download here.
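The regression approach can be sketched as an ordinary least squares fit of the outcome on a treatment indicator plus pre-treatment covariates. Everything below is simulated for illustration: the covariate, sample size, and coefficients are assumptions, not the study's specification, and only one covariate stands in for the fuller set (age, sex, race, district, prior use of force) the analysis controlled for.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulated data: a random treatment indicator and one pre-treatment
# covariate (years of service), both invented for this example.
treated = rng.integers(0, 2, n)   # 1 = assigned a BWC, 0 = control
years = rng.uniform(1, 30, n)     # pre-treatment covariate

# Simulated outcome with no true treatment effect but some dependence
# on the covariate, plus noise.
outcome = 0.05 * years + rng.normal(0, 1, n)

# OLS via least squares: columns are intercept, treatment, covariate.
X = np.column_stack([np.ones(n), treated, years])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# beta[1] is the estimated treatment effect; it should be near zero here
# because the simulation built in no true effect.
print(round(beta[1], 2))
```

Controlling for pre-treatment covariates does not change what the estimate targets; it soaks up predictable variation in the outcome, which tightens the confidence interval around the same treatment effect.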

All of our analyses were conducted by two independent statistical teams, to help avoid coding errors and as a check of convergence in results.