That nicely straightforward formulation comes from the Department of Commerce’s information quality guidelines from 2006.  It refers to the requirements the Department places on research used in policymaking, and is a typical interpretation of the OMB guidelines on the subject from 2005.

I mention it because today is the deadline for comments in this year's Special 301 process, and the quote figures in the comment we (SSRC) filed with the US Trade Representative's Office, which oversees the process.

Special 301 is a mechanism for issuing warnings to other countries regarding their lack of compliance with US views on intellectual property policy and enforcement—and, in theory, for expediting sanctions when those countries don’t heed the warnings. It is the main source of international pressure for stronger enforcement and stronger IP protection.

The Special 301 process has traditionally been a closed circuit between government and industry, with such features as single-day comment windows, which allow industry to complain but make it impossible for targeted countries to respond. Recently, however, there have been a few signs of openness, including the first-ever hearing, scheduled for March. The process is also attracting quite a bit of attention this year because of parallel USTR efforts to negotiate the Anti-Counterfeiting Trade Agreement (ACTA), which have been secretive and controversial.

The SSRC comment pulls together work from our upcoming piracy report on transparency, participation, and evidentiary standards for Special 301.   The full comment is here.  I’ve posted excerpts below the fold:

…..

Until recently, there was little pressure for greater participation or procedural transparency in the Special 301 process. The industry-centered institutional culture of the USTR discouraged it; the most obviously affected parties—other countries—had no meaningful standing; and the traditional obscurity of trade policy sheltered it from the public attention directed at other powerful policymaking bodies. The legal status of Special 301 reinforced these tendencies. The Special 301 process is an "informal adjudication" as opposed to a rulemaking process. As described by the Administrative Procedure Act, a rulemaking is forward-looking, whereas adjudication traditionally describes a more technical determination of rights and responsibilities based on existing rules and past conduct. In our view, this distinction misses the primary function of Special 301 as an instrument for changing other countries' policies and for shaping negotiations that ultimately bind the US.

The informality of the process also plays an important part in shaping participation and accountability.  As the term suggests, ‘informal’ processes leave considerable leeway with respect to procedures.  Notably, they do not have to be “on the record after the opportunity for an agency hearing.”   Because there have been no hearings, there is virtually no record of how decisions are made.

The term has nonetheless been subject to a variety of legal interpretations and clarifications regarding what constitutes due process in such contexts, with strong consensus in the courts that "a minimum procedure must include at least some form of notice and an opportunity to be heard at a meaningful time and in a meaningful manner." In our view, the Special 301 process has been out of compliance with any reasonable understanding of this standard. Minimal notice, still inadequate in our view, was introduced only in 2008. A meaningful opportunity to be heard will take place for the first time in 2010.

Despite this obscurity, the USTR has to meet certain basic requirements to justify its findings, including acting on the basis of evidence collected during the Special 301 process. With some 50-60 countries placed annually on the Watch Lists, the research requirements of the Special 301 process are considerable. The USTR's research role was never clearly defined by statute and quickly defaulted to industry, which ramped up its research capacities throughout the 1990s to meet the new demand. This division of labor quickly became embedded in the USTR's internal organization: in 2009, only eight USTR staff worked on IP issues. Most of the findings, legal recommendations, and country detail discussed in the Special 301 report simply recapitulate the submissions of the major industry groups. In our work on copyright, by far the most important group is the IIPA. Among the 54 countries listed by the IIPA for inclusion on the Watch and Priority Watch lists in 2008, the USTR accepted 46 (or 85%). This close relationship goes back more than 20 years: the IIPA was instrumental in the creation of Special 301, and the two can be fairly described, in our view, as the research and policy sides of a larger collective enterprise.

The Special 301 comment period is part of the annual effort to gather “any information as may be available to the Trade Representative and … as may be submitted by interested persons” (19 USC 2242(b)(2)(B)).  Interested persons can include other countries, non-US industry groups, non-governmental organizations, and—in theory—individuals.   In practice, it has overwhelmingly meant US industry.  The USTR’s interest in hearing from other parties has generally been viewed as negligible, and this perception has been reinforced by the unusual restrictions on the comment process itself.  Until 2008, all comments were due on the same day—a requirement that made the notification of countries and same-year replies to complaints impossible.   Under these circumstances, only a handful of countries (and typically no NGOs) bothered to submit comments at all, and the few that did generally responded to the previous year’s comments.

Under new rules that went into effect in 2008, countries (but not NGOs or other parties) were permitted two additional weeks to submit comments after industry submissions were received.  This small opening had a dramatic effect on participation: the number of countries submitting comments jumped from 3 to 24.

Special 301 Comments

                                  2007  2008  2009
Companies and industry groups       21    19    30
Countries on previous 301 lists      4     3    24
Individuals                          0     2     1
Nonprofits                           1     0     0

Source: Flynn et al., 2009

The spike in comments was also marked by a perceptible change in tone.   Traditionally, foreign countries have been deferential in their dialogues with the USTR—often highly so.  Country comments typically catalog the actions taken in the past year to meet American wishes, and on that basis request removal from the watch lists.  Local policy and enforcement activities in targeted countries also often follow the seasonal rhythm of the Special 301 process, as governments seek to head off placement on the watch lists.

…..

Like other government agencies, the USTR has been subject to recent requirements to adopt higher evidentiary standards and greater transparency about the research it uses in policymaking. Much of this pressure has originated with industry groups looking for tools to head off unwanted regulatory action resulting from federally funded scientific research. This is the background, notably, of the 2000 Data Quality Act, which established procedures for complainants to challenge data used in policymaking. While many view the Act as a victory of lobbying over science, the interesting question for agencies like the USTR is what the Act implies in contexts where there is no scientific research culture to undermine.

In 2005, the Office of Management and Budget issued an interpretation of the Data Quality Act that required peer review whenever the Federal Government disseminates "scientific information [that has] a clear and substantial impact on important public policies or private sector decisions" worth more than $500 million (Office of Management and Budget 2005). The OMB specifically included economic and other policy-relevant research under this rule. It noted further that a comment process, in which contending parties submit and challenge each other's comments, is not an adequate substitute for peer review. When the Department of Commerce implemented the OMB directive in 2006, it placed emphasis on "transparency – and ultimately reproducibility" as the crucial standard in policy research. It clarified further that transparency "is a matter of showing how you got the results you got" (Department of Commerce 2006).

The outsourcing of research to the IIPA and other industry groups allows the USTR to exempt Special 301 from such quality-control efforts. Nothing in the Data Quality Act or the OMB bulletin addresses transparency requirements for privately produced research, or discusses how to improve policymaking processes that depend entirely on it. The absence of hearings or a reasonably structured comment process further ensures that Special 301 fails to meet even the lower evidentiary standards of a robust adversarial process, in which comments from diverse stakeholders are solicited and weighed. The USTR does, nonetheless, set two modest requirements for submitted comments. It specifies that (1) comments should "provide all necessary information for assessing the effect of the acts, policies, and practices"; and (2) "any comments that include quantitative loss claims should be accompanied by the methodology used in calculating such estimated losses."

By any reasonable standard, these requirements go unmet.  Unsurprisingly, industry reporting presents the industry case, and in the copyright context these cases are extremely narrow—made almost entirely without reference to the wider dilemmas that structure piracy in other countries or the manifold complexities of determining net impact on industries or societies.  Industry associations do publish short general descriptions of their methods—in the IIPA’s case, in the methodology appendix to its Special 301 submissions—but little about the assumptions, practices, or detailed findings of their work.  IIPA findings, as a result, are generally impossible to verify or reproduce.  Because IIPA is, in most cases, simply aggregating research from other industry associations such as the BSA, RIAA, and MPA, the key questions must ultimately be addressed to them.   We have tried to do so, but without much success.

It is impossible, for example, to evaluate BSA findings on rates of business software piracy without understanding the key inputs into the model: how they calculate the number of computers in a country; how they estimate the presence of open source software; or how they model the 'average software load' on machines in different countries. It is impossible to evaluate the MPA's claims from its major 2005 international consumer survey without knowing what questions the survey asked and how key variables, such as the displacement rate between pirate and licit sales, were calculated. IFPI aggregates consumer surveys from its local affiliates, but indicates that each affiliate makes its own choices about how to conduct its research. There is no general template for the surveys—nor, for outsiders, any clarity about how IFPI manages the obvious challenges of aggregating the studies. RIAA—drawing on the same data provided by local affiliates—does calculate losses for countries it deems high-priority targets for enforcement, through methods it also attributes to the local affiliates. Although ESA research has only made claims about the street value of pirated games—some $4 billion in 2007—and expressly avoids the language of industry losses, its figures found their way into the industry loss column in IIPA reports.
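To make concrete what verification would require, the sketch below reconstructs the general shape of a BSA-style loss model from the inputs named above. It is a hypothetical illustration in Python, not the BSA's actual model: every parameter name and figure is invented, since the real values and estimation methods are precisely what remains undisclosed.

```python
# Illustrative reconstruction of a BSA-style software piracy loss model.
# Every input below is a hypothetical placeholder: the real values, and the
# ways they are estimated, are exactly what the published methodology omits.

def estimate_piracy(num_computers, avg_software_load, open_source_share,
                    legit_units_sold, avg_unit_price):
    """Return (piracy_rate, dollar_losses) under BSA-style assumptions."""
    # Total installed software: machines times average packages per machine,
    # minus the share assumed to be open source (and thus non-commercial).
    total_units = num_computers * avg_software_load
    commercial_units = total_units * (1 - open_source_share)

    # Anything installed but not legally sold is counted as pirated.
    pirated_units = max(commercial_units - legit_units_sold, 0)
    piracy_rate = pirated_units / commercial_units

    # Each pirated unit is valued at an average per-unit price -- the
    # retail-price valuation criticized later in this post.
    dollar_losses = pirated_units * avg_unit_price
    return piracy_rate, dollar_losses

# Hypothetical country: 10M PCs, 8 packages per machine, 15% open source,
# 30M units legally sold, $90 average unit price.
rate, losses = estimate_piracy(10_000_000, 8, 0.15, 30_000_000, 90)
print(f"piracy rate: {rate:.0%}, estimated losses: ${losses / 1e9:.1f}B")
```

Even in this toy form, the sensitivity is obvious: modest shifts in the assumed software load or open source share move the headline loss figure by billions of dollars, which is why the undisclosed inputs matter far more than the arithmetic itself.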

Every report has its own secret sauce—the assumptions that anchor the methods and inform the results.   Few of these assumptions are public.  The typical rationale for withholding such information is that the underlying research relies on commercially sensitive data.  This is certainly possible in some cases—notably in the case of sales figures, about which some companies are secretive.  But it can hardly explain the across-the-board reluctance of industry groups to show their work.   This is a key difference between an advocacy research culture, built on private consulting, and an academic or scientific research culture whose credibility depends on transparency and reproducibility.

In our view, this secrecy has become counterproductive in a context in which hyperbolic industry claims have undermined confidence in industry research. Criticizing MPA, RIAA, and BSA claims about piracy has become a cottage industry in the past few years, driven by the relative ease with which headline piracy numbers have been shown to be wrong, made up, or impossible to source. The BSA's annual estimate of losses to software piracy—$53 billion to US companies in 2009—dwarfs the other industry estimates and has become an iconic example of the commitment to big numbers despite obvious methodological problems—notably, the BSA's continued valuation of pirated copies at retail prices, a practice abandoned by all the other industry groups. Widely used estimates of 750,000 US jobs lost and $250 billion in annual economic losses to piracy have proved similarly ungrounded, or based on decades-old guesses (Sanchez 2008).
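To see why the retail-price valuation matters so much, consider the comparison sketched below. The figures and the 20% displacement rate are invented for illustration; displacement-adjusted methods of this general kind are what the other industry groups moved toward.

```python
# Hypothetical illustration of how the valuation assumption drives the
# headline number. All figures are invented for the comparison.
pirated_units = 1_000_000   # invented count of pirated copies
retail_price = 50.0         # invented per-unit retail price
displacement_rate = 0.20    # assumed share of pirated copies that displace a sale

# BSA-style: every pirated copy is counted as a lost sale at full retail price.
retail_valuation = pirated_units * retail_price

# Displacement-adjusted: only copies that would otherwise have been
# purchased count as losses.
displaced_valuation = pirated_units * displacement_rate * retail_price

print(f"retail-price valuation:      ${retail_valuation:,.0f}")
print(f"displacement-adjusted (20%): ${displaced_valuation:,.0f}")
# Under these assumptions the retail method yields a figure five times larger.
```

The gap between the two numbers is entirely a product of the valuation assumption, not the underlying data, which is why methodological disclosure is the crux of the dispute.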

…..

In our view, these questions about evidence and participation bring into relief the tensions in what appears to be a transitional moment in the global IP and enforcement regime. Since the creation of the WTO in 1995, the USTR has operated in a position of ambiguous legality and soft power—able to threaten countries through Special 301, but (mostly) unable to implement unilateral sanctions for fear of generating an adverse WTO ruling. The stability of this position, in our view, was the product of a number of factors, including the industry's virtual monopoly on the evidentiary discourse around piracy; the disorganization of developing-country coalitions on IP policy; and the general obscurity of copyright and enforcement issues, which allowed IP policymaking to fly under the radar of most consumers and public interest groups. Whereas all of these factors held true six or seven years ago, it is difficult to make a strong case for any of them today. Industry research has been widely delegitimized by the excesses of its advocacy campaigns; developing countries are more organized and assertive with regard to IP policy; and enforcement has begun its 'consumer turn' toward measures that are likely to make traditionally closed policy venues like the USTR much more visible and controversial in the public eye. A more transparent and participatory Special 301 process is, in our view, the only viable way forward for all parties.
