Philosophy

We are surrounded by great quantities of data. At its core, statistics is a field that allows us to make sense of those data, to see the larger picture, and to determine which special cases need more investigation.

Statistics can be used to answer questions and, just as importantly, to measure how good those answers are. Unfortunately, our brains are not naturally wired to think statistically. It is a field with lots of jargon and many subspecialties. Statistics can be, and has been, used inappropriately to manipulate how data are interpreted. There’s a reason for the oft-used phrase “Lies, damned lies, and statistics.” But it doesn’t have to be that way. Statistics can be elegant, meaningful, and, if not intuitive, at least understandable to lay people.

How I started this practice

(This description will make more sense if you also know that I cannot walk past a jigsaw puzzle without getting completely sucked into it for hours.)

The first EIS I read and commented on, in 2013, described the potential impacts of oil drilling associated with Lease Sale 193 in the Chukchi Sea. The comment period ended just before the winter holidays. I read as much of the EIS as I could and submitted a six-page comment, trying to connect the inevitable risks back to the actual proposal and the mandates of the agencies. One of the risks of primary importance was the threat of oil spills. The EIS determined that the probability of at least one large spill (with large defined as more than 1,000 barrels, or 42,000 gallons) was 75%.
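For readers curious where a number like 75% comes from: analyses of this kind typically model spill occurrences as a Poisson process, so an expected number of spills over the life of the project implies a probability 1 − e^(−λ) of at least one. Here is a minimal sketch in Python; the expected-spill count is one I chose to reproduce the reported figure, not a value taken from the EIS.

```python
import math

def prob_at_least_one_spill(expected_spills: float) -> float:
    """P(>= 1 spill) under a Poisson model with the given mean count."""
    return 1.0 - math.exp(-expected_spills)

# Illustrative value: an expected count of about 1.4 large spills over
# the life of the project yields roughly the 75% figure in the EIS.
lam = 1.4
print(f"P(at least one large spill) = {prob_at_least_one_spill(lam):.0%}")
```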

After the new year, I looked through the other comments that had been submitted. Very few got into the nitty-gritty details of the project proposal, and those that did were mostly from environmental attorneys. I contacted one of them and began a correspondence that led me to a deep dive into how that 75% figure had been arrived at. His take was that only the consultants contracted by the authors of the EIS really understood how the predicted number and probability of spills were estimated. Given my background in mathematical modeling and statistics, I dug in.

The Lease Sale 193 EIS (layer 1) cited an oil spill risk analysis (layer 2), which in turn cited a fault tree analysis of Arctic oil spill frequency (layer 3). The fault tree analysis concluded that the relative risk of drilling for oil in the Arctic was almost 50% lower than if the same infrastructure (miles of pipeline and number of platforms) were used in the Gulf of Mexico. I found it difficult to understand how Arctic oil production could possibly be safer than oil production in the Gulf. The consultant who produced the fault tree estimate had prepared several previous versions of the work, so I examined earlier reports dating from 2002, 2006, 2008, and 2011 (let’s call these layers 3a-d), looking for the fundamental assumptions of the model and the origin of the data used. I eventually found the raw spill data from the Bureau of Ocean Energy Management that had been used in the analysis (layer 4), along with many of the references cited in the fault tree estimates (layer 5).
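I won’t reproduce the consultant’s exact model here, but fault tree analyses of this kind generally express spill frequency as rates per unit of exposure, such as spills per pipeline-mile-year and per platform-year, so that two regions with identical infrastructure differ only in the assumed rates. A sketch of that structure, with invented rates purely for illustration:

```python
# Illustrative only: these rates are invented, not the consultant's values.
def expected_spills(pipeline_miles: float, platforms: int,
                    rate_per_mile_year: float, rate_per_platform_year: float,
                    years: float) -> float:
    """Expected large-spill count from exposure-based fault tree rates."""
    return years * (pipeline_miles * rate_per_mile_year
                    + platforms * rate_per_platform_year)

# Same hypothetical infrastructure and project life, different regional rates.
miles, n_platforms, years = 300.0, 5, 20.0
lam_gulf = expected_spills(miles, n_platforms, 1.0e-4, 1.0e-2, years)
lam_arctic = expected_spills(miles, n_platforms, 0.5e-4, 0.5e-2, years)
print(f"relative risk (Arctic / Gulf) = {lam_arctic / lam_gulf:.2f}")
```

Under this structure, the entire regional difference lives in the per-unit rates, which is exactly where the assumptions and the provenance of the data matter most.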

I checked the origin of every number used in the modeling, examined the reasoning and references behind every assumption, and redid the math. I found errors and inconsistencies at every step. All of those major flaws were invisible to readers of just layers 1 and 2, and they had a profound impact on how the risks of large oil spills were treated in projects proposing offshore oil drilling in the Arctic.

Peeling back the layers of the onion

I began reading EISs knowing that the documents themselves were only the starting point, and I learned that many, if not most, of the references cited in EISs never face peer review. Some of the reports produced by consultants and contractors are very good science. Some are not. It takes time, determination, and expertise to evaluate whether the risks and impacts are fully and accurately represented. Under NEPA, project proponents and lead agencies are required to take a hard look at potential environmental impacts so that concerned parties and decision makers can be fully informed. If the science in EIS layers 2 and deeper is substantially flawed or incomplete, that standard is not met.

Scientific integrity

Every EIS is different because every project is different. I come to each with an open mind, ready to learn and ask questions. When the logic or conclusions don’t make sense to me, or when I want to understand how particular quantitative values were derived, I read the deeper layers. If those layers answer my questions, I move on. If they don’t, I dig in further. In every case, it is the quality of the data and the science that drives my commentary and critique of the EIS.

As a scientist who has submitted several papers for peer review, I know the importance of constructive criticism and how it can improve my work and its contribution to the literature. I approach critiquing EISs, and the works cited within them, in the same spirit. If major decisions that affect ecosystems and communities are being made based on the science in EISs, that science had better be good and stand up to scrutiny.

Communication

None of this effort means anything if my clients don’t understand my conclusions or trust the process and logic that got me there. I know that reading highly technical commentaries on already specialized, jargon-filled reports is not something most people want to spend time doing. One of the most important and enjoyable aspects of digging into EISs is interacting with clients and helping them see and understand what I learn in my work. My goal is to break my analysis into fundamental pieces that are objectively justified and make sense. I have found that talking through the work is often essential, leading to deeper understanding, better focus, and a more efficient use of time for all parties.