From: CATO at Liberty
By Derek Bonett
***
In comes Daniel Walters, a Regulatory Fellow at the University of Pennsylvania Law School. His most recent law review article empirically tests the above hypothesis: that agencies will promulgate vaguer rules in the aftermath of the Auer case than before. He finds that this is not the case; using an empirical approach to the study of law that ought to be much more popular, he cannot reject the null hypothesis that there is no change in the measured vagueness of federal regulations before and after Auer. In this blog post, I would like to highlight a major shortcoming of Professor Walters's otherwise commendable methodological effort.
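To fix intuitions, here is a minimal sketch of the kind of pre/post comparison at issue, using invented vagueness scores and a simple difference-of-means test. Walters's actual measure and estimation strategy are more sophisticated; every number and variable name below is hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical vagueness scores for rules promulgated before and after Auer.
# (Invented numbers; both samples are drawn from the same distribution,
# mimicking a world where Auer changed nothing.)
pre_auer = rng.normal(loc=0.50, scale=0.10, size=500)
post_auer = rng.normal(loc=0.50, scale=0.10, size=500)

# Two-sample test of the null hypothesis that mean vagueness is unchanged.
t_stat, p_value = stats.ttest_ind(pre_auer, post_auer, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # large p: fail to reject the null
```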
***
Yet Simpson's Paradox can be fractally applied to intra-agency change over time as well. It seems reasonable to assume that any given agency has a portfolio consisting of multiple "sub-topics." The Department of the Interior, for instance, might issue rules pertaining to land use, water rights, and Native American reservations. Some of these sub-topics will be inherently more technical (and thus less vague) than others, as will the multiple, distinct statutes that grant an agency authority to do X, Y, and Z. So, just as one must assume that the relative proportions of agency contributions to the aggregate metric remain constant pre/post Auer in order to rule out the alternative hypothesis, so too must one assume that the intra-agency mix of sub-topics remains constant over time. If there happens to be a secular trend across all agencies toward spending an increasing proportion of their total rulemaking activity on the inherently more technical sub-topics in their portfolios, it could occur right alongside a secular trend toward increasing vagueness within every sub-topic, and yet be masked in the overall agency figures, not to mention the aggregate dataset.
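To make the composition effect concrete, here is a minimal sketch for a single agency with two sub-topics. The sub-topic names, shares, and vagueness scores are all invented for illustration:

```python
# Composition effect in one agency's portfolio: vagueness rises within every
# sub-topic, yet the agency-level average falls. (All numbers hypothetical.)

# (share of rulemaking, mean vagueness score) per sub-topic
pre  = {"land_use": (0.40, 0.30), "water_rights": (0.60, 0.70)}
post = {"land_use": (0.80, 0.35), "water_rights": (0.20, 0.75)}

def aggregate(portfolio):
    """Share-weighted mean vagueness across sub-topics."""
    return sum(share * vagueness for share, vagueness in portfolio.values())

for topic in pre:
    print(f"{topic}: {pre[topic][1]:.2f} -> {post[topic][1]:.2f}  (rises)")
print(f"aggregate: {aggregate(pre):.2f} -> {aggregate(post):.2f}  (falls)")
```

Both sub-topics grow vaguer (0.30 to 0.35, and 0.70 to 0.75), but because the agency's output shifts toward the more technical sub-topic, the share-weighted average falls from 0.54 to 0.43: precisely the masking described above.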