Independent Audit of the ShotSpotter Accuracy

Executive Summary

According to a report from the Brookings Institution, 88 percent of gunshot incidents go unreported to police.[1] The ShotSpotter system is an acoustic gunshot detection service that detects, locates, and alerts police to gunfire, including incidents that would otherwise go unreported. ShotSpotter enables law enforcement agencies to provide a precise and rapid response to detected incidents. The system uses wireless sensors throughout a coverage area to capture loud, impulsive sounds that may be gunfire. The data are transmitted to a central cloud service that assigns each incident a gunfire probability along with a location determined by triangulation across the sensors that detected the sound. ShotSpotter employees at two ShotSpotter Incident Review Centers use playback tools to listen to the audio pulses from those sensors, analyze the waveforms to see whether they match the typical visual pattern of gunfire, and either publish the incident as gunfire or dismiss it as non-gunfire. The entire process is intended to take less than 60 seconds from the time of the gunfire to the time law enforcement is alerted, allowing for a timely law enforcement response.
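
To make the localization step concrete, the sketch below shows one standard approach to acoustic source location: least-squares multilateration over time differences of arrival (TDOA) at sensors with known positions. This is an illustrative textbook method, not ShotSpotter's proprietary algorithm; the sensor coordinates, arrival times, and grid-search solver are all assumptions.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C (assumed constant)

# Hypothetical sensor positions (x, y) in meters and sound arrival times in
# seconds; real deployments use many more sensors over a wider area.
SENSORS = [(0.0, 0.0), (500.0, 0.0), (0.0, 500.0), (500.0, 500.0)]
ARRIVALS = [1.031, 1.237, 1.237, 1.379]

def tdoa_residual(x: float, y: float) -> float:
    """Sum of squared differences between observed and predicted
    arrival-time deltas, measured relative to the first sensor."""
    def dist(s):
        return ((x - s[0]) ** 2 + (y - s[1]) ** 2) ** 0.5
    err = 0.0
    for i in range(1, len(SENSORS)):
        predicted = (dist(SENSORS[i]) - dist(SENSORS[0])) / SPEED_OF_SOUND
        observed = ARRIVALS[i] - ARRIVALS[0]
        err += (predicted - observed) ** 2
    return err

# Coarse grid search over the coverage area; a production system would use a
# nonlinear least-squares solver rather than brute force.
best = min(
    ((x, y) for x in range(0, 501, 5) for y in range(0, 501, 5)),
    key=lambda p: tdoa_residual(*p),
)
print(f"Estimated source location (m): {best}")
```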

ShotSpotter claims that its system is 97% accurate and has a false positive rate—the rate at which gunfire is detected when none occurred—of 0.5%. To determine the accuracy rate for its system, ShotSpotter analyzes information from clients on possible errors, determines whether an error occurred, and catalogs any errors found. ShotSpotter commissioned Edgeworth Analytics to conduct an independent audit of the data and analyses that it uses to support its claims. Our audit yielded four important insights (the arithmetic behind the headline rates is sketched just after this list):

  • ShotSpotter published 146,804 and 233,966 gunfire alerts to clients in 2019 and 2020, respectively.[2] For these years, across all clients, our audit confirmed that, based on client reports, ShotSpotter correctly detected, classified, and published gunfire with 97.59% accuracy.
  • Across 2019 and 2020, the ShotSpotter system published alerts of gunfire when clients subsequently indicated that none occurred 0.41% of the time.
  • Despite substantial variation across clients in how intensively potential errors are reported, ShotSpotter’s accuracy rate does not appear to be sensitive to differences in clients’ propensity to report them.
  • No single client exerts a disproportionate effect on ShotSpotter’s overall error reporting rate such that the accuracy rate would change significantly.
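
For intuition about how the two headline rates relate to the underlying counts, here is a minimal sketch of the arithmetic. The ticket totals below are not ShotSpotter's actual figures; they are back-calculated from the published rates purely for illustration.

```python
def accuracy_rate(published: int, confirmed_errors: int) -> float:
    """Share of published alerts not subsequently confirmed as errors."""
    return 1.0 - confirmed_errors / published

def false_positive_rate(published: int, confirmed_false_positives: int) -> float:
    """Share of published alerts that clients confirmed were not gunfire."""
    return confirmed_false_positives / published

published = 146_804 + 233_966   # 2019 + 2020 alerts, from this report
errors = 9_177                  # illustrative: implies ~97.59% accuracy
false_positives = 1_561         # illustrative: implies ~0.41% false positives

print(f"Accuracy:            {accuracy_rate(published, errors):.2%}")
print(f"False positive rate: {false_positive_rate(published, false_positives):.2%}")
```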

This report discusses Edgeworth Analytics’ approach to auditing ShotSpotter’s data and analysis and our additional testing, intended to ensure the validity of our results.

ShotSpotter Data Sources

Edgeworth Analytics obtained data from ShotSpotter for 2019 and 2020. We discussed the data available and ShotSpotter’s error tracking and reporting process with ShotSpotter personnel. Based on our discussions with ShotSpotter personnel, we requested the following data:

  • The number of published incidents sent to clients, by location;
  • Potential errors identified by clients for investigation and ShotSpotter’s conclusions regarding those potential errors; and
  • Several samples of “Monthly Scorecards,” which are documents sent to clients summarizing the activity detected and the error rates.

ShotSpotter data on published incidents are tracked in ShotSpotter’s own systems. Information on potential errors, however, relies on clients reporting those potential errors to ShotSpotter. When an error report comes in from a client, ShotSpotter creates a ticket and reviews the incident. The review may reach one of several conclusions (a schematic encoding of these categories follows the list):

  • A gunfire incident did not occur, but ShotSpotter published an alert for one—this is referred to as a “false positive”;
  • A gunfire incident occurred and ShotSpotter detected it, but an alert was not published for gunfire—this is referred to as a “false negative”;
  • A gunfire incident occurred but was not detected by ShotSpotter—this is referred to as a “missed” incident;
  • ShotSpotter failed to accurately identify the location of the gunfire to within 25 meters of the actual location—this is referred to as a “mislocated” incident; or
  • The error report was incorrect, or the incident was one that ShotSpotter is not intended to detect, such as gunfire outside the coverage area, indoors, or from a small caliber weapon (e.g., less than 25mm).
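
For readers who work with the data, these outcomes map naturally onto a small data model. The sketch below is our own illustrative encoding, not ShotSpotter's ticketing schema; which categories count against the accuracy rate reflects our reading of the process described above.

```python
from enum import Enum, auto

class ReviewOutcome(Enum):
    """Possible conclusions of an error-ticket review, as described in this
    report; the encoding itself is illustrative."""
    FALSE_POSITIVE = auto()  # alert published, but no gunfire occurred
    FALSE_NEGATIVE = auto()  # gunfire detected, but no alert was published
    MISSED = auto()          # gunfire occurred but was not detected
    MISLOCATED = auto()      # located more than 25 meters from the true spot
    NOT_AN_ERROR = auto()    # report incorrect, or incident out of scope

# Outcomes treated as confirmed errors when computing accuracy (our reading).
ERROR_OUTCOMES = {
    ReviewOutcome.FALSE_POSITIVE,
    ReviewOutcome.FALSE_NEGATIVE,
    ReviewOutcome.MISSED,
    ReviewOutcome.MISLOCATED,
}

def is_confirmed_error(outcome: ReviewOutcome) -> bool:
    return outcome in ERROR_OUTCOMES
```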

We used these data to conduct our audit.

Edgeworth Analytics Audit Results and Robustness Checks

First, Edgeworth conducted an analysis to ensure that the data were complete and accurate. Specifically, we compared the published incidents and errors detected in the Scorecards to those in the underlying data we received. Our analysis confirmed that the data appeared to be complete and accurate.
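
A completeness check of this kind amounts to a reconciliation: for each client and month, compare the totals printed on the Monthly Scorecards against totals recomputed from the underlying incident records. The sketch below is a minimal version of such a check under assumed field names; it is not our audit code verbatim.

```python
from collections import Counter

def reconcile(scorecard_counts: dict, raw_incidents: list) -> list:
    """Return the (client, month) keys where Scorecard totals disagree with
    totals recomputed from the raw incident records.

    scorecard_counts: {(client, month): published_count}   -- from Scorecards
    raw_incidents:    [{"client": ..., "month": ...}, ...]  -- underlying data
    """
    recomputed = Counter((r["client"], r["month"]) for r in raw_incidents)
    keys = set(scorecard_counts) | set(recomputed)
    return [k for k in keys if scorecard_counts.get(k, 0) != recomputed.get(k, 0)]

# Hypothetical example: one month matches, one does not.
scorecards = {("City A", "2019-01"): 2, ("City A", "2019-02"): 3}
raw = [
    {"client": "City A", "month": "2019-01"},
    {"client": "City A", "month": "2019-01"},
    {"client": "City A", "month": "2019-02"},
]
print(reconcile(scorecards, raw))  # -> [('City A', '2019-02')]
```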

Once the data were validated, we reviewed them and consolidated them into a format suitable for our analysis. This involved combining reporting of events across data sources and reviewing data fields and the possible outcomes of error reports. Using these data, we independently calculated the accuracy across the categories ShotSpotter uses for its reporting. Our analysis confirmed that the accuracy rate across all ShotSpotter clients for 2019 and 2020 was 97.42% and 97.70%, respectively. Having audited and validated ShotSpotter’s claims, we conducted additional analyses to confirm that these results are robust.
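
The consolidation step can be pictured as a join of the two data sources on client and year, after which the yearly accuracy falls out directly. All field names and counts in the sketch below are hypothetical.

```python
from collections import defaultdict

# Hypothetical inputs: published alert counts and confirmed-error ticket
# counts, keyed by (client, year). Names and values are illustrative only.
published = {("City A", 2019): 12_000, ("City A", 2020): 15_000,
             ("City B", 2019): 8_000,  ("City B", 2020): 9_500}
confirmed_errors = {("City A", 2019): 290, ("City A", 2020): 340,
                    ("City B", 2019): 210, ("City B", 2020): 230}

# Consolidate across sources and compute accuracy by year.
alerts_by_year = defaultdict(int)
errors_by_year = defaultdict(int)
for (client, year), n in published.items():
    alerts_by_year[year] += n
    errors_by_year[year] += confirmed_errors.get((client, year), 0)

for year in sorted(alerts_by_year):
    accuracy = 1.0 - errors_by_year[year] / alerts_by_year[year]
    print(f"{year}: {accuracy:.2%}")
```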

Since accuracy reporting depends on clients informing ShotSpotter of potential errors, we tested whether differences in the intensity of reporting may have unduly influenced the reported accuracy. For example, if a client with a relatively high volume of published gunshot incidents rarely reports potential errors, then the reported accuracy rate may be higher than the actual rate. To test for this issue, we identified the areas where the intensity of reporting potential errors was at or below the 5th and 10th percentile of client reporting intensity. As shown in Figure 1 below, if these clients are removed from the data entirely—an extreme test—then the overall accuracy would decrease by less than 1%. Alternatively, assuming these clients with low reporting intensity all had the reporting intensity of the 5th or 10th percentile client and that all additional reports were erroneous ShotSpotter alerts, the overall accuracy rate would again decrease by less than 1%.[3] These accuracy rates are not statistically significantly different from the overall accuracy rate for all ShotSpotter clients.
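
Both robustness checks can be expressed compactly: compute each client's reporting intensity (error reports per published alert), find the 5th- or 10th-percentile intensity, then either drop the low-intensity clients or raise them to the percentile intensity while counting every imputed report as a confirmed error. The client data below are hypothetical, and the percentile and imputation conventions are our reading of the test described above.

```python
def robustness(clients, pct):
    """clients: list of (published_alerts, reported_errors) tuples, one per
    client. pct: percentile threshold as a fraction (e.g., 0.05 or 0.10).
    Returns (accuracy_excluding_low_reporters, accuracy_with_imputed_errors)."""
    intensities = sorted(err / pub for pub, err in clients)
    cutoff = intensities[int(pct * (len(intensities) - 1))]

    # Test 1: drop clients whose reporting intensity is at or below the cutoff.
    kept = [(p, e) for p, e in clients if e / p > cutoff]
    acc_excluded = 1.0 - sum(e for _, e in kept) / sum(p for p, _ in kept)

    # Test 2: raise low reporters to the cutoff intensity, treating every
    # additional (imputed) report as a confirmed error.
    total_pub = sum(p for p, _ in clients)
    total_err = sum(max(e, cutoff * p) for p, e in clients)
    acc_imputed = 1.0 - total_err / total_pub
    return acc_excluded, acc_imputed

# Hypothetical client data: (published alerts, reported errors).
clients = [(12_000, 290), (9_500, 60), (8_000, 210), (15_000, 340), (4_000, 5)]
print(robustness(clients, 0.10))
```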

Figure 1: ShotSpotter Accuracy Rates by Exclusion Threshold, 2019 and 2020

Note: Excluded accounts include new clients, pilot-program clients, and clients whose service was terminated, as well as clients from which feedback was not expected.

Source: ShotSpotter.

Citations

[1] Brookings Institution, “The Geography, Incidence, and Underreporting of Gun Violence: New Evidence Using ShotSpotter Data,” https://www.brookings.edu/research/the-geography-incidence-and-underreporting-of-gun-violence-new-evidence-using-shotspotter-data/

[2] A small number of ShotSpotter accounts—six in 2019 and 12 in 2020—are for clients for which feedback was not expected. These included new clients, pilot programs, and clients who terminated their service, as well as some low volume clients. Excluding these accounts, there are 144,739 alerts in 2019 and 229,359 alerts in 2020 with an accuracy rate of 97.56% on average across the years.

[3] This analysis is conservative as it is only conducted on the more restrictive set of clients excluding those not providing or expected to provide feedback.
