What did we learn?

Economists clearly favour a scientific method whereby hypotheses based on theoretical postulates are tested against objectively selected quantitative data, whereas other social scientists would also consider social norms and the power structure relevant. Those working in government expressed a preference for a participatory process in which all types of qualitative information could be moulded into an overall body of evidence, implicitly weighted and interpreted in terms of the governmental framework. Acceptable evidence could be qualitative, quantitative, experiential and even of the 'story' type. A distinction can be made here between an ethnographic approach, in which close observation is valid evidence, and anecdotal evidence, which may illustrate but is not generalisable. There is, however, an inevitable tension between specificity and generalisability. Among the academics there was the view that a wide variety of qualitative and quantitative approaches are valid and may be necessary to offer a rounded picture, and that no data source need be privileged. However, crucial data were often perceived to be held by firms and collecting societies that may have an interest in controlling access and use for their own policy purposes.

This led to a discussion of bias in evidence. Evidence produced by interested parties should not be dismissed out of hand as tainted, since it may have validity in its own terms; but identifying where data manipulation or other bias is present can be exceedingly time-consuming. Experiments were suggested as a methodology that sidesteps some of these issues. The need for replication, or at least replicability, was seen as important: testing a hypothesis against evidence should in principle be repeatable, and such repeated tests are desirable from the methodological point of view. This is the 'verifiability' advocated by the IPO's Guide to Good Evidence.

By way of introduction, Kris Erickson and Martin Kretschmer presented their study commissioned by the IPO on the economic effects of parody (a parody defence to copyright infringement is currently not provided by UK law) to demonstrate how they set about the research and interpreted the results to produce evidence for policy purposes. This presentation [link to video] was referred to by several subsequent speakers as illuminating, suggesting that presentations of this sort could demonstrate how academics conceive of research. The difficulty of obtaining data for research on copyright was also an issue, although public consultations in the UK make submissions available by default. While this has enabled researchers to trawl through submissions, respondents are not required to make all their data public and may submit in confidence. The context of evidence may be significant for interpreting its meaning. Some participants had difficulty in accepting the basic requirement of social science that generalisation is essential. Nor may it be possible for the government to have an overall unity of approach to policy-making: one example raised was the difference between applying the rules for competition policy and those for copyright.

The conditions for good quality research meeting academic standards are hard to achieve in a policy-making setting. Government usually operates on a relatively short time frame and wants answers much more quickly than is usual for academic research. Commissioned research often has a short turnaround time that favours professional consultancies over academia. The same applies to stakeholder evidence: those who are well financed to present data and other evidence have the advantage in a short time frame over those who cannot commit the resources to produce it. This may put small enterprises and consumer groups at a disadvantage, especially in comparison to large, well-funded industry lobbying bodies, including international organisations that seek to influence policy. Besides the various enquiries in the UK, evidence is also frequently sought by the EU and international forums, with the result that evidence is not targeted to a specific enquiry; often the same evidence is presented in a generalised manner to whichever body is currently asking for it. The UK government sees stimulating economic growth as the objective of policy, including for copyright; others have different objectives, such as enabling the EU common market to function.

Several conclusions can be drawn from the Symposium. It was hard to avoid the impression that there is very little truly objective evidence, and that judgement is therefore required to assess evidence for policy purposes. Though there was apparent agreement on the wide range of valid methodological approaches, a pick-and-mix approach (as often adopted in government summaries of consultation evidence) is bound to be misleading. There is a tremendous difference between the scientific method of hypothesis testing and a mash-up of evidence of all sorts that requires policy-makers to apply an unspecified set of weights depending on the stance of the current Government.

Two proposals to improve evidence-based policy making in the field of copyright emerged.

Quality filter

In order to address the potential bias of evidence submitted to public consultations on copyright, a quality filter may be needed. It was felt that the legal concept of admissible evidence for litigation (in the common law tradition) was not very helpful here, as it allows anything that is potentially persuasive to bear on the findings of fact. Some argued that a more interventionist stance was needed, as in judge-led public enquiries where evidence may be examined under oath. Others felt that a small panel of independent scientific advisers could be tasked with sifting and reviewing the quality of submissions.

Process design

A second promising idea is to focus on the process of assembling evidence, rather than on assessing the quality of submitted evidence. This might require a careful articulation of the burden of proof for change, and the opening up of public consultations beyond organised stakeholder groups. If at the heart of copyright law is a trade-off between under-production and under-use of creative goods, the process of policy-making may have to reach out consciously to a much wider range of digital innovators and users, and perhaps direct resources for producing evidence into new areas.

In the recent past, the UK government has repeatedly adopted copyright policies in contravention of the findings of independent research, for example in supporting copyright term extension for sound recordings from 50 to 70 years against its own commissioned review of the evidence (CIPIL, 2006; Directive 2011/77/EU), and introducing without any evidence base a provision that would extend copyright in the artistic features of mass-produced designs from 25 to life-plus-70 years (Bently et al., 2012; Clause 56 of the Enterprise and Regulatory Reform Bill, currently before Parliament). The Intellectual Property Office laudably aspires to clarity, verifiability and the potential for peer review of evidence submitted, but those characteristics cannot be said to apply to the policy-making process itself: there, policy-based evidence still appears as valid as evidence-based policy.