The following is a guest post co-authored by one of our cofounders and Benjamin Sovacool. It is a response to a criticism of an article whose results we have cited and discussed numerous times. Although different from our normal analyses, this post addresses many important issues in energy, energy modelling, and analysis methodologies.

By Alex Gilbert (1) and Benjamin K. Sovacool (2)

(1) Spark Library, Washington DC, USA and (2) University of Sussex, United Kingdom


Earlier this year, we published a paper in the peer-reviewed journal Energy where we analyzed more than ten years of renewable energy projections from the Annual Energy Outlook (AEO), a major publication put out by the United States Energy Information Administration (EIA):

Alexander Q. Gilbert and Benjamin K. Sovacool, Looking the wrong way: Bias, renewable electricity, and energy modelling in the United States. Energy 94 (2016).

We found that these projections, which are widely relied upon by policymakers and industry, systematically underestimated wind and solar growth. Contributing factors notably include policy uncertainty, unjustified assumptions, and modelling issues. Based on this, we concluded that unless these systematic errors are addressed, EIA’s projections will continue to distort perceptions of future renewable growth.  We further suggested that such distortion undermines the AEO’s usefulness in informing policy.

In the hope of developing a scholarly dialogue with EIA regarding the development and use of AEO models, we sent our study to EIA in January 2016. In March 2016, EIA published their own review of their renewable energy projections, arguing that:

  • “EIA’s projections offer a sound foundation for analyzing the impact of policy proposals” on energy markets
  • “Recent criticisms of EIA projections for wind and solar markets have included exaggerated, misleading, or simply false conclusions regarding the accuracy and usefulness of EIA’s results”
  • And that “careful review shows EIA projections to be reliable indicators of overall energy market conditions.”

Despite being aware of our study and being in dialogue with us, EIA did not address our study’s findings or results in their review.

In correspondence with us, they also argued, somewhat offensively, that “we feel that you are in the minority in your misunderstanding of the NEMS model, and we have nothing further to gain by continuing to engage with you.”  Rather than continue to engage with us directly, David Daniels and Chris Namovicz from the EIA published a separate critique of our paper in April 2016, arguing that our conclusions were based on a misunderstanding of the NEMS model and that several statements were “in error, misleading, or otherwise unfit for publication in a scholarly journal.”

As we present below, the main points of EIA’s critique are not matters of fact but of interpretation.  EIA’s critique also, worryingly, raises concerns about the willingness of the broader energy modelling community to engage meaningfully with well-intentioned, independent critics.

Normally such challenges to the results of a published peer-reviewed article are handled with a technical comment and response in the publishing journal. However, despite repeated requests by us and a request by the journal Energy itself, EIA refused to submit such a comment, preferring instead to publish their criticism as a working paper on the EIA website.

Due to the importance of this topic, we provide a point-by-point response to EIA’s critique in this paper. While more detailed responses are below, our specific high level responses are:

  1. EIA seems to misunderstand the rather speculative and reflexive nature of our language in the article, where we are careful to admit—not having full access to EIA’s underlying model or all assumptions—that parts of our analysis are not objective Truth but an interpretation of a possible truth. This is how much of social science energy work progresses.
  2. Our overall method of comparing EIA’s projections of future renewables to actual outcomes is neither incorrect nor misleading. The accuracy of one of the most important energy projections in the United States is a valid subject of study, regardless of whether EIA calls it a projection or a forecast. Our analysis is similar to the one EIA itself conducted, except that it was done independently and, in some respects, more comprehensively.
  3. EIA misrepresents and misreads our analysis of how their model handles state renewable portfolio standards (RPSs). While there are some minor issues in our original study (resulting from misleading or incomplete documentation from EIA), the underlying factual issues we identify remain: EIA’s consistent overprojections of biomass generation affect how they model state policies, and problems with their representation of REC trading persist.
  4. EIA’s argument that NEMS accounts for either price volatility or uncertainty is absolutely false, and is based on a fundamental misunderstanding of how these factors impact decision making in electricity.
  5. EIA’s argument that environmental externalities do not impact the growth of renewables is in stark contrast to what is actually occurring in the real world.

Our specific, more detailed responses to EIA’s criticisms are below:

Academic language and scientific truth in the social sciences

Given that we were conducting our analysis independently from the EIA, we were only able to conduct our analysis based on information published by the EIA itself. When publishing the AEO, EIA provides highly detailed outputs of their projections that far surpass those available from any other energy model. Further, while their documentation is not perfect, it is highly detailed in many respects.

Nevertheless, we did not have full access to all of the data inputs, the underlying model itself, or all of the assumptions behind EIA’s modeling parameters. That’s why we constructed our analysis as hypotheses based on observations from the model output (observations which the EIA, notably, does not dispute).

Thus, as is often the case in research, we had to infer. We were very careful throughout the article to call attention to areas where we relied on an interpretation and so could have gotten things wrong. This is because we believe that models and econometric tools such as those the EIA employs are not merely tools of analysis or truth, but tools of persuasion and power that paint the energy sector a particular way. These perspectives get packed “into” the model itself; demonstrating this is, we hope, one of our article’s more lasting contributions.  Realizing this complexity behind models, and harboring a healthy skepticism based on the potential limitations in our methods, we took great care to think about and frame our discourse accordingly.

For instance, in our discussion about whether the EIA model does or does not allow unlimited REC trading between all regions of the country, we specifically hedge our language by noting that:

“Second, the renewable constraints equation implies the potential for unlimited renewable credit trading between all regions in the country… Accordingly, the unrestricted trading assumption implied by the NEMS equation does not reflect the actual policy environment.” (emphasis not in original)

These parts of that paragraph are clearly dealing with our interpretation of a specific part of their documentation. Based on this interpretation, we decided to test how things were without trading (“we conducted an analysis that assumed no trading between regions.”)

Moreover, we would note that many of our conclusions regarding existing RPS policies are qualified and are supported by both our original and current analyses (emphasis not in original):

Abstract: “Key modifications suggested by the study include… properly modelling state renewable energy mandates”

End of the RPS section: “These findings confirm our hypothesis” that EIA’s treatment of RPS policies “influence” underprojections of wind and solar.

Conclusion: “… our analysis indicates that the AEO is likely not modelling existing” RPS policies correctly.

Thus, in a way, the EIA misconstrues parts of our analysis as being more definitive than they are.  Much of their criticism over “facts” is more about analysis and interpretation. If we misunderstood specific aspects of the EIA’s model, it is due to inherent limitations in analyzing a complex model only through publicly available documentation. Our hypothesis-driven method, based on undisputed data observations, was designed to maintain the robustness of our conclusions despite these potential limitations. We would urge the EIA to consider a similar discursive approach—one that is more nuanced and open to the possibility of differing interpretations—when they publicly discuss their model and even publish their AEO.

There is one point that we believe is very important to emphasize here: despite the limitations of its documentation, EIA’s overall level of transparency is among the best in the field. This transparency was key in allowing us to conduct our study in the first place, even if a lack of complete documentation prevented our analysis from being as robust as possible. By comparison, a lack of transparency in model outputs, assumptions, and overall documentation hinders objective assessment of many other government, industry, and academic energy models.  This is partially what makes our correspondence with EIA so frustrating: they are among the best, but are surprisingly resistant and defensive when presented with critiques of their renewable projections.

EIA’s Point 1: “[The Study] misconstrues AEO’s projection as a forecast.  Since this underlies the framing of the analysis, the paper’s method of evaluation and interpretation of results are incorrect and misleading.”

We have five specific arguments against this.

First, there is no definitional difference between a projection and a forecast. The two words are synonyms, and many definitions of one use the other. Both involve using current understanding to make estimates about the future. Further, calling it a projection instead of a forecast does not mean that the projection cannot be tested for accuracy. Finally, EIA appears to be inconsistent on this matter: the website URL where EIA’s AEO is located uses the word “forecasts”.

Second, as far as we are aware, there is no established difference between a projection and a forecast in the energy modelling literature. EIA’s distinction between a projection and a forecast appears to be used or understood only by them.

Third, while we believe the above points are definitive, it is also important to note that EIA’s projections are often treated as forecasts. Although EIA would disagree with the below characterizations, they are representative of how most organizations view EIA’s projections:

This only underscores that EIA’s limited disclaimers regarding the purpose of the AEO and its status as “not a forecast” have not been understood by its users (we believe this confusion arises because there is no actual distinction between projection and forecast). While a study examining EIA’s projections has value in its own right, our study highlights that AEO’s projections should not be used as they commonly are.

Fourth, our study was testing the predictive value of EIA’s AEO reference cases as these are specifically used for economic, market, and policy analysis. Regardless of whether the AEO is a forecast or a projection, the accuracy of reference case projections compared to actual conditions is a critical factor in analyzing whether AEO’s policy analysis is reliable.

EIA apparently agrees with us, as their own study looking at their renewable projections compared past projections (from the reference case) to actual results. However, unlike our study, EIA’s analysis of their past projections is not comprehensive and selectively highlights EIA’s most accurate projections. Similarly, although it does not address renewables, EIA regularly issues a retrospective review comparing AEO reference case projections to subsequent reality.

Fifth, especially in light of the last point, our study and its methods are entirely acceptable. Our paper has major substantive value to the question of whether these projections reflect what happens in reality. Indeed, our conclusion regarding EIA’s projections of renewables almost directly contradicts the conclusion in their own analysis of the issue. For that alone, it has probative value as to whether AEO projections reflect reality and why.

There could be two reasons that they do not: (1) the assumptions that EIA uses (policy, cost, other) do not match what subsequently happens and (2) the methodology used by the EIA has caused the projections and reality to not align. The answers to these questions, which our study seeks to address, are a critical step in understanding the proper role of AEO projections and how they should be interpreted by policy makers (with implications more broadly for energy modelling).

EIA’s Point 2: “[The Study] misreads and/or misunderstands the published NEMS documentation, in which is detailed how NEMS meets state-level RPS requirements.”

Our analysis on this point is entirely based on our reading of NEMS documentation. EIA has informed us that this documentation is incomplete (and in our view either misleadingly or overly vague).

Specifically, based on a reasonable interpretation of EIA’s documentation, our study asserted that biomass co-firing was improperly being used within the model to comply with state RPS policies, in contravention of actual legal requirements. EIA has specifically informed us that their model does not allow biomass co-firing to comply with RPS policies that do not allow it. This was not in the documentation.

However, this does not ultimately affect our conclusions. Our assessment of biomass co-firing was based on a key observation that we made: EIA’s projections of biomass co-firing were unrealistically high and could only be explained by it being used to meet RPS requirements. We described this finding in more detail in section 4.4 of our original paper, and EIA has not challenged this factual point. In re-analyzing this issue, we have also discovered that EIA’s projections of distributed biomass generation were unrealistically high. Both EIA’s projections of high levels of biomass co-firing and of distributed biomass are at odds with any market expectation regarding RPS compliance.

Thus our hypothesis that EIA’s inaccurate biomass, solar, and wind projections result from their model improperly handling biomass generation in RPS modelling remains, and it is well supported by the factual evidence.

Despite not discussing this issue in their own review, EIA admitted in correspondence with us that inaccurate biomass generation was a known issue within their model. As far as we can tell, they have addressed this issue in the most recent AEOs (2015 and 2016). Nevertheless, this was very clearly a major issue for the time frames we examined (2004-2014).

EIA also argued that our analysis of trading issues within their RPS modelling was a “misreading and/or misunderstanding” of their documentation. Rather, we argue that it is EIA’s description of our REC trading analysis that is inaccurate and misleading. Contrary to EIA’s assertion, we never explicitly state that NEMS has unlimited renewable credit trading between all regions of the country and our analysis is not predicated on such an assumption.

On this specific point, we have found NEMS’ documentation to be incomplete (and, in our view, misleading). In its critique, EIA indicated the following: “Although the documentation does use the term ‘credit’ to describe it, it is effectively modeled as a physical trade, not as a REC.”

However, our analysis of REC trading was not based on the assumption of unlimited REC trading, nor does it depend on the modeling mechanism EIA used (i.e. a physical trade versus a separate REC). Rather, our reading of EIA’s documentation led us to test how the model output compares to actual REC trading patterns.

We did this by conducting the analysis described in the paper – we looked at how regional RPS compliance without regional trading led to trading deficits in certain regions. By identifying which regions needed REC imports for RPS compliance in the model, we could then compare model output for these regions with actual interstate REC trade.

Critically, there are some limitations to this analysis – it is a more restrictive version of trading than exists in reality and is limited to what we could obtain from AEO output (so in some cases our analysis did not include some hydro eligible for state compliance). However, based on the size of the regional deficits (how much interregional REC trade would be needed to meet a region’s RPS requirement in the model) we found that interregional trading was an issue in NEMS treatment of existing RPS policies.
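The deficit calculation described above can be sketched in a few lines. This is our own illustrative reconstruction, not EIA code; the region names and generation figures are hypothetical placeholders, not AEO output.

```python
def rec_deficit(rps_requirement_gwh, eligible_in_region_gwh):
    """Implied out-of-region REC need: the RPS requirement minus
    in-region RPS-eligible generation, floored at zero."""
    return max(0.0, rps_requirement_gwh - eligible_in_region_gwh)

# Hypothetical regions: requirement and eligible generation in GWh.
regions = {
    "Region A": {"requirement": 50_000, "eligible": 30_000},
    "Region B": {"requirement": 20_000, "eligible": 22_000},
}

for name, r in regions.items():
    deficit = rec_deficit(r["requirement"], r["eligible"])
    share = deficit / r["requirement"]
    # The implied import share is what we compared against actual
    # interstate REC trade for each region.
    print(f"{name}: implied REC imports {deficit:,.0f} GWh ({share:.0%} of requirement)")
```

The interesting quantity is the implied import share: where the model’s deficit share consistently exceeds observed out-of-state REC purchases, the model’s trading representation is doing work that real markets do not.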

Specifically, the implied trading patterns and needs in California, RFC, and New York consistently would require out-of-region RECs in the model well beyond what they take from out-of-state RECs in reality. California is a good example – in many timeframes the model output consistently has 33-50% of RPS-compliant renewable energy coming from out of state. In reality, the number was 28% in 2013, and it will decrease over time due to new restrictions.

EIA challenges this analysis by arguing that “conducting an analysis of renewable energy mandates assuming no trading between regions would not be consistent with current RPS laws that allow for trading.” EIA is correct – state laws are not a major barrier to interstate REC trading. However, EIA’s model does not account for differences in REC tracking systems and other practical considerations. These tracking systems effectively limit REC trading in ways that state laws do not. Accordingly, if EIA were to more fully account for historical REC trading patterns, their projections could improve substantially.

As a final point, EIA’s general criticism of this section of our paper is partially based on cherry-picking from our analysis in a way that does not accurately reflect it.

A specific example (and one EIA has highlighted) is the sentence “Essentially, we tested whether NEMS actually models renewable energy mandates the way that it claims it does.” Taken in isolation, this sentence is not a correct characterization of our analysis. Rather, it is a specific description of the immediately preceding sentence, which describes how we calculated NEMS’ RPS requirements for individual regions. The way that we made this calculation is indeed how NEMS calculates regional requirements and has been corroborated by other EIA analysis. Again, this sentence is inaccurate if taken in isolation, but not inaccurate when understood within its intended context.

EIA’s Point 3. “[The Study] incorrectly asserts that NEMS overlooks price volatility and the uncertainty associated with the potential for future legislation in decision-making.”

Our analysis on this point is 100% accurate and is an important substantive point. Although our paper specifically identified natural gas price volatility, wholesale electricity price volatility is a closely related and, in some ways, more important issue. Indeed, EIA’s model does not in any way represent the wholesale electricity markets responsible for compensating more than two-thirds of U.S. electricity generation. Future changes in these markets, which EIA’s model cannot represent, will have critical ramifications for all major energy sources.

Specifically, EIA argues:

“The contention that NEMS does not account for volatility is not supported by corroborating evidence in the paper, and it is in fact incorrect; therefore, the claim in this section is unsupported.  NEMS, although not capturing price volatility directly as a stochastic model might be able to do, uses several techniques to capture the effect of the uncertainty of future prices.  The principle method is to publish side case results with different fuel prices, which were not considered by this analysis.”

EIA is referring to different price scenarios in AEO cases besides the reference cases. These price scenarios for a specific fuel (e.g. natural gas) are very different from price volatility. A price scenario assumes a specific annual average price for each year, and the model uses that price in calculating capacity expansion and generation decisions. Price volatility refers to the specific characteristic of electricity and natural gas price movements on an annual, monthly, daily, hourly, and even sub-hourly basis.

These price variations have a specific effect and considerably heighten uncertainty about future prices. Given the uncertainty about both future natural gas and electricity prices, the fixed-cost nature of renewables is a well-established, attractive hedge that is not reflected in EIA’s modelling.
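A toy numerical example (our own construction, not anything from NEMS) makes the distinction concrete: two price paths can share the same annual average, which is all a price scenario fixes, while differing sharply in the volatility a buyer actually experiences.

```python
import statistics

# Two hypothetical monthly natural gas price paths ($/MMBtu):
flat_prices = [4.0] * 12          # steady all year
volatile_prices = [2.0, 6.0] * 6  # same annual average, wide swings

# A model driven only by the annual average treats both paths identically.
assert statistics.mean(flat_prices) == statistics.mean(volatile_prices)

# Yet their volatility (population standard deviation) differs sharply;
# a fixed-cost renewable contract hedges the second case, not the first.
print("mean:", statistics.mean(volatile_prices))            # 4.0
print("flat stdev:", statistics.pstdev(flat_prices))        # 0.0
print("volatile stdev:", statistics.pstdev(volatile_prices))  # 2.0
```

The point of the sketch is simply that an annual-average scenario collapses the second moment of prices to zero, and it is that second moment that drives the hedging value of fixed-cost resources.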

Further, while the AEO cases include price scenarios, NEMS as a model does not. We were specifically examining reference cases as they are the most widely used (by EIA and others). Within any specific reference case there are no price scenarios. Price volatility (and its associated effects) is thus an aspect not considered in EIA’s modelling, and it is a valid concern.

EIA also asserts that the following addresses price volatility:

“Even in the reference case, the technology-specific hurdle rates that are used to determine whether each technology is economically competitive are considered to capture the higher costs of capital required by technologies that are dependent on fuels with volatile prices.”

These technology-specific hurdle rates do not address any issues related to natural gas or wholesale electricity price volatility and are wholly inappropriate for that purpose. Further, we believe this statement is a factually inaccurate representation of cost structures for electricity generation. Technologies that use fuels with volatile prices (particularly natural gas) actually have relatively low capital costs. There is no established relationship between capital costs and volatile fuel prices. Fuel price volatility impacts perceptions of risk and uncertainty; increasing capital costs slightly is not an accurate way to capture these factors. Moreover, this adjustment only applies to new builds within the model. It does not in any way affect existing power plants, which face daily issues with price volatility; that volatility is reflected in variations in wholesale electricity prices, which in turn lead consumers to value the fixed-cost nature of renewables.

Regarding uncertainty for future legislation:

Specifically we conclude: “However, the current structure of NEMS is limited in its ability to determine how decisions to build renewable energy may be driven by decisions to limit emissions that are not currently controlled by regulations.”

This was referring to the fact that the environmental characteristics of renewable energy are reflected in multiple aspects of decision making regarding renewables: in policy, by organizations desiring a “green” appearance, and in specific decisions weighing renewable energy against other fuels. The counterexample provided by EIA, stating that future legislative uncertainty is represented by a restriction on building new coal plants, illustrates our point: NEMS only has a limited ability to determine how uncertainty related to future environmental regulations impacts today’s decisions.

EIA’s Point 4. “[The Study] uses speculative assertions about how environmental externalities, such as water use and air pollution, drive the penetration of renewables as evidence of shortcomings in the NEMS model.”

This one is pretty straightforward, so we will keep it short. EIA does not charge that there are any factual inaccuracies or misrepresentations of their models here. Rather, they disagree substantively that these environmental externalities have historically been a driving factor of renewable energy. There are many, many examples that we could point to (green power programs, preemptive concerns about carbon regulation, regulatory risk mitigation, consideration of water in electricity) that are neither measured nor valued within the NEMS model.


While we thank the EIA for taking the time to read and engage with our article, and we have learned from their more detailed explanations of how NEMS works and some of its assumptions, we ultimately remain troubled.  The core of our argument, that EIA’s forecasts of renewable energy remain inaccurate, stands largely unrefuted.  Moreover, EIA modelers such as David Daniels (Chief Energy Modeler for the entire EIA) and Chris Namovicz (Renewable Electricity Team Lead) seem to know their model does not work as well as it should, in particular with regard to renewable energy; they disagree with us primarily over the reasons why. If they indeed know more about its flaws than we do, then the next logical question is: why don’t they fix them?

Their correspondence with us—when they state that they have “nothing further to gain” by engagement with those outside the EIA or perhaps merely outside the narrow modeling community—may provide a partial answer. Apart from being flippant, their remark does suggest a plausible reason why the EIA models have remained inaccurate—or, as we suggest, possibly biased—over such a long timeframe. As we noted in the conclusion to our original article in Energy, apparent errors in EIA modeling related to renewables have occurred during both Republican and Democratic administrations, and over the tenure of four very different Secretaries of Energy and three heads of the EIA. As their dismissive attitude may indicate, perhaps the problem is not with the model itself, but the modelers.