Social scientists' view of evidence for copyright policy

Speakers

Dr. Christian Handke (Erasmus University Rotterdam) - hereinafter (CH)

Tom Hoehn (Visiting Professor, Imperial College) - hereinafter (TH)

Dr. Joost Poort (IvIR Univ. Amsterdam) - hereinafter (JP)

Dr. Nicola Searle (Abertay) - hereinafter (NS)

Dr. Davide Secchi (BU) - hereinafter (DS)

Chair: Prof. Philip Schlesinger (Glasgow) - hereinafter (PS)

Questions & Answers:

Paul Heald (Illinois & BU) - hereinafter (Paul Heald)

Ruth Towse (Professor, Bournemouth University) - hereinafter (RT)

Robin Jacob (Professor, UCL) - hereinafter (RJ)

Will Page (Director, Spotify) - hereinafter (WP)

Andrew Prodger (CEO, BECS) - hereinafter (AP)

Lee Edwards (Lecturer, University of Leeds) - hereinafter (Lee Edwards)

Martin Kretschmer - hereinafter (MK)

(PS) We've got a bumper panel of talent here, I hope. I'm Philip Schlesinger; I'm Professor in Cultural Policy at the University of Glasgow, a Deputy Director of CREATe, the new copyright centre. On my left Christian Handke, Tom Hoehn, on my right, Joost Poort, Nicola Searle and Davide Secchi - so that's the line-up for this afternoon. I don't know what any of them is going to say, but they will speak for no more than seven minutes - I hope.

Just by way of perhaps an introductory comment, picking up on the evidence sessions that we had earlier, I do think the IPO's position, as enunciated, is rather incoherent. What's on paper is really quite a positivistic and scientistic conception of research. The only way I can interpret why it takes that form is that it's there to protect against lobbying and basically, interest groups. Because it really doesn't bear much relationship to how social scientists go about their work - at least not this one. I think also that it does privilege the quantitative over the qualitative. And I think we've heard some quite eloquent arguments about why cases and qualitative accounts may actually be taken rather seriously as evidence and give us important insights into the field that we're discussing.

Really just to say a couple of words about a study that Charlotte Waelde and I concluded last year which used I suppose what you might call qualitative sampling, which was a study of dancers and musicians of a precarious kind and their relationship to copyright (Waelde and Schlesinger, 2011; Schlesinger and Waelde 2011; Schlesinger and Waelde 2012). And one of the interesting things that came out in that part of the world, in those kinds of less privileged sectors, was that copyright did not figure as important. We had started off, perhaps, with the usual assumption that copyright would matter greatly. It's not that it doesn't matter; it's just that making copyright have a significant impact on your income is such a big push that it doesn't matter greatly.

And then that opened up a whole set of other questions which I think it's really quite important for us to put on the agenda, like non-economic values, values of cooperation - not necessarily altruism, but where people will engage in collaborative behaviour without any immediate, or even medium-term, or even ever, a pecuniary return, because that's the nature of what it is to be in a creative occupation.

So, I'll just leave you with that thought because I have a bevy of economists here. And I am a sociologist; so I'll put the counter position if they don't put it. So, with no further ado let me go onto Christian Handke who will hold forth for the next seven minutes and no longer.

(CH) Thank you, Philip. Now, I don't want to talk about methodology either necessarily. Let me start with three points regarding today's discussion so far. First of all, I happen to agree with the IPO guidelines for good evidence. As you would expect from an academic I find peer review perfectly normal - even though I don't always appreciate the process when it happens to me. Nevertheless, I do believe in that process, and I find nothing offensive about applying it where contributions in response to calls for evidence are concerned.

Second, the ultimate aim of transparency is usually to enable replication. And I understand how that wouldn't feature prominently in those guidelines for good evidence because replication would be very exceptional in practice. Yet I think that's a good measure of whether transparency really has been achieved. To ask a hypothetical question: would we be able to replicate this study? If not, we should be more sceptical about the value of the evidence provided.

Third, I also believe that anecdotal and qualitative evidence has a strong role to play in providing evidence for copyright policy. I would argue that quantitative studies and qualitative studies should be clearly distinguished, however. Maybe those submitting evidence should be encouraged to really decide when they're doing what. Qualitative evidence may inform us about specific problems and how they come about. For generalisation purposes across a large number of stakeholders, quantitative evidence is more important. The adequate use and interpretation of these types of evidence is crucial.

Let me move on to my main point. In my short statement I take inspiration from a very different aspect of the brief that the speakers received. One of the issues suggested was whether it matters what kinds of questions we ask - and I reckon that matters very much indeed.

First let me raise a rhetorical question: what is copyright supposed to achieve? We haven't really addressed that very much today. Maybe we're all in agreement; maybe not - we'll see. Now, I'm just an aspiring social scientist, but it would seem to me that one official aim of copyright is to promote innovation and creativity. Or to rephrase slightly, the question is: how does unauthorised use affect innovation and what can copyright do about it without excessive unintended consequences?

I suggest that we need to address this question. Maybe it's not the only one, but I think it should be rather central, and we should address it head on. Today that hasn't happened. Quite generally, it doesn't appear to happen much in the debate on copyright. Let me immediately put in a qualifier: innovation is a very tricky subject: it's a multifarious concept, it's complex, it tends to happen in fits and starts. As Bengt-Åke Lundvall (1992, p. 12) has stated, a strong element of randomness will always remain regarding innovation and technological change, and all related issues. It is a really slippery and tricky topic. Today, I was impressed to hear Jeremy Silver using an avalanche of metaphors to describe some of the problems associated with assessing radical innovation in particular, which requires people to fundamentally change how they cope with a problem.

In any case, the question of innovation is not the main theme in copyright debates. It isn't in the economics of copyright, of which I have a reasonable overview. Today innovation wasn't central either. I believe that is a serious oversight.

Let me further suggest that it's worthwhile to look at two different types of innovation in this whole issue. One is the obvious issue of content creation, as I would call it, concerning the supply of new copyright works. The question is how unauthorised use - or its countermeasure copyright - affects the flow of new copyright works, and thus future welfare. The other type of innovation would be technological innovation concerning means to disseminate these works. Technological innovation could be managerial, organisational, technical and so on.

There have been a couple of studies at least on the content creation part, which is the more obvious candidate, admittedly, but I don't think it's the only one that matters. Regarding content creation a handful of quantitative studies have been put out, not a single one of which finds that stronger copyright protection is associated with more varied or more valuable supply - whether they examine variations in the duration of copyright or the emergence and diffusion of digital copying technologies like file sharing. In particular regarding the effect of file sharing on the supply of new creative works, there is no evidence that we would be worse off than 13 years ago, just before Napster started operating. That is not widely appreciated. Perhaps it is possible to provide that evidence. I don't think that has happened yet and there are few people who seem to be trying.

Again, a qualifier: it's important not to jump to conclusions. The existing studies on copyright and content creation are preliminary. There's the potential for protracted effects. It might very well be that any effect of file sharing on the supply of creative works transpires with a long delay. I'm not convinced that we don't have a problem, or that we will never have a problem there. And furthermore, remember the problems of randomness and uncertainty associated with technological change that I mentioned; fair enough, things could have been even better. A lot of work remains to document the impact of unauthorised use on content creation. So far, there is no evidence to support the intuition that unauthorised use reduces content creation. There's a potential that we will come across further counterintuitive results that won't go away if we look harder.

As a final point to round off the picture, there's the issue of technological innovation. I don't have time to discuss that extensively but I think that's also an important question: how does the copyright system as it is affect technological innovation in the copyright industries and in related sectors? This is particularly relevant where user innovation is more important relative to innovation conducted by current rights holders. And with that question I leave you.

(TH) I'm Tom Hoehn. I'm at Imperial College where I research in the area of IP, and I also teach a course called Business Models and IP. And I will now rename it Tarzan Economics - where the exam question last year was: explain the business model of Spotify. So, it's fascinating in that area. I also act as a panel member at the Competition Commission (www.competition-commission.org.uk/) where I'm a monopoly economist looking at how markets work or don't work. And I'm not speaking on behalf of the Competition Commission here when I make some comments about evidence gathering and the IPO rules.

There are three points; and I will choose to comment on two. One is clarity, clarity of evidence. I think we need to distinguish between clarity of presentation and the fact that when we look at data we often find data is messy, it's not very clear, it's difficult to interpret. Sometimes we have too much data and we have to test for various inferences; sometimes we have too little data and we still try to make some inferences. And it is hard work and is not always clear; but we'll try to make it as clear as possible.

There's another point about looking at data and getting some clarity: distinguishing between what is statistically significant and what is economically significant. And often people confuse the two. So, you may have a statistically significant effect - for example I did look at the effect of term extension for copyright in sound recordings, and I found that there was an effect; but it was a very small effect. And we need to be very careful what we then read into that, and we need to do a lot of thinking. So, thinking is very important.
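To make that distinction concrete, here is a minimal illustrative sketch in Python (hypothetical numbers, not the sound-recording study TH refers to): with a very large sample, a tiny effect can easily clear the conventional significance threshold while remaining economically negligible.

```python
# Minimal sketch: statistical vs economic significance (all numbers hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000                                   # very large sample
control = rng.normal(100.0, 15.0, n)          # e.g. an outcome index without the policy
treated = rng.normal(100.2, 15.0, n)          # true effect is only +0.2 (about 0.2%)

t_stat, p_value = stats.ttest_ind(treated, control)
effect = treated.mean() - control.mean()

print(f"estimated effect: {effect:.2f} points ({effect / control.mean():.2%})")
print(f"p-value: {p_value:.4g}")              # typically far below 0.05
```

The p-value only says the effect is unlikely to be exactly zero; it says nothing about whether 0.2% matters for policy - that judgement is the 'thinking' part.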

Second point, verification or verifiability. I prefer the term replicability. And we actually use that term at the Competition Commission (CC), where there is some guidance on providing economic evidence in competition proceedings. Transparency is there, and then there is replicability. And that is very important. I make the distinction because what we want to do when we're presented with an argument or a piece of economic evidence is to see whether, given the data and the model, we can replicate the results. And that's a very important step. And that raises the question: how do you get access to the data? And there are various ways of doing that. Somebody on the first panel, it may have been Pippa, said that if you have data and it's an issue of confidentiality, you don't want to publish it, there are ways of dealing with it: you can give it to the ONS (Office for National Statistics), I think was one proposal. I've seen methods where data rooms are created so that if people want to verify, check or replicate the analysis they can go there - they have to leave their smartphone behind - and they can go on the terminal, take the data and run the analysis to see whether they can replicate it. So, there are interesting ways in which you can allow for verifiability, in IPO terms, or replicability; and I think I would encourage more of that.

And my other point about verification and replicability is this point made earlier in the day: we had a discussion about the duty to disclose data. I strongly believe that more data should be disclosed, particularly if you are a monopoly body or a statutory body collecting lots of information on behalf of an industry. And I think there should be a duty to disclose. There are reports that come out of government; again in my experience at the MMC (Monopolies and Mergers Commission, former name of the Competition Commission) or the Competition Commission these reports are very valuable research tools, but if they are heavily edited because of confidentiality that needs to be respected, they become meaningless. And I think we should be very careful about what we excise from these reports when we do industry investigations, whether it is a body like the CC or a body like the IPO.

So, that's my point about verification. Then there is the point about peer review, which I don't really want to talk about; I think we should be careful not to make it too academic. But there's the question of how we get access to data and how we share data. There is just one thing I would like to argue we should use more of, in policy making or policy analysis, and that's experiments. We do like natural experiments, we like looking at history, we like looking at data that has been collected; but when it comes to designing policy instruments we need to be very careful and think about what the expected impact is. And some of these instruments are either new, aren't tested, can't be checked against history; or the situation we're faced with is a new one, with new business models. So, I encourage the use of experiments. And you can use experiments to test willingness to share data, which is what we are doing at the moment in a project at Imperial College. So, what are the incentives that allow or get people to share data - that is, medical data or personal data? And we can talk about what happens with Google when we go and use these search engines: we give away our own data and they re-use it. And then there's the question of give and take. And I think we should understand how that works and how context-specifically we react to different incentive schemes.

(JP) My name's Joost Poort. I work at the Institute for Information Law, associated with the University of Amsterdam. And I work there as an economist. First of all, thank you for the invitation to be present here.

Considering the IPO document, I have to start off noting that I endorse most of what's there. But what's presented there is mostly a Holy Grail: okay, peer review, data available, replicability - fantastic. But as was pointed out already, timelines or money often do not allow for peer review in the process - for research to be published in peer-reviewed journals before the deadlines of the consultation periods are over.

The question of course is: what are the ways to get around that if you can't reach this Holy Grail? I think that's where the really interesting questions lie. And also data - publishing data appendices: particularly if work has not yet been published in peer-reviewed journals, it will go against the interests of the academics involved if they still want to publish it in journals. So, they might not even want to disclose the data - not because they don't want people to look over their shoulder, but because they need it for their future academic careers.

So, indeed peer reviewed is first best I think. But maybe a way to get around it would be to have an expert panel considering evidence that is brought into a political debate or policy making debate: to scrutinise research that has been presented by several parties. And then of course the funder of certain research should not disqualify research, saying it's paid for by the industry so it's crap. But it could raise scrutiny saying: well, they do have an interest; it was not from independent sources of finance so we have to be really awake and alert to see if there is any spin put on this research.

I came across a very nice example quite recently: I did a survey on file sharing (https://www.ivir.nl/publications/poort/Filesharing_2012.pdf), amongst other things, and my finding was that file sharers are the largest customers of the industries, buying more music, streaming more music from legal sources, going to concerts more often. And the same result was found by Joe Karaganis from The American Assembly (Columbia University) (https://piracy.americanassembly.org/where-do-music-collections-come-from/) recently. And the journalist I came into contact with said, 'Do you know this research?' And the IFPI had reacted to the other research, saying, 'Research by The NPD Group during 2010 in the US found that just 35 per cent of P2P users also paid for music downloads.' (https://www.ifpi.org/content/section_news/20121017.html). And he said, 'Are they lying? How was that in your research?' And I looked in my own data, and actually, spot on 65% of people who had downloaded from illegal sources in the last year had not paid for downloading music in the last year; but of the people who had not downloaded from illegal sources in the last year, 92% had not paid for downloading music. So, they were presenting a fact, they were presenting evidence; but they were leaving out the important context needed to interpret this evidence.

It needs someone to be really into this debate and to really understand what's presented and what's left out to be able to interpret this evidence. It's a figure, and it's a correct figure; but it's still misleading to present it in an isolated way. So, I think that's a neat example of why you probably need to organise your peer review, even if it's not through the academic peer review process that normally takes two years.
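The arithmetic behind JP's example can be made explicit with a small sketch; the counts below are invented purely so that the two conditional shares match the percentages he quotes, and the point is that one share is uninterpretable without the other.

```python
# Invented survey counts chosen only to reproduce the two percentages quoted above.
downloaders     = {"paid_for_downloads": 350, "did_not_pay": 650}   # 65% non-payers
non_downloaders = {"paid_for_downloads": 80,  "did_not_pay": 920}   # 92% non-payers

def share_not_paying(group):
    return group["did_not_pay"] / sum(group.values())

print(f"file sharers who did not pay for downloads:     {share_not_paying(downloaders):.0%}")
print(f"non-file-sharers who did not pay for downloads: {share_not_paying(non_downloaders):.0%}")
# Quoting only the first figure ('65% of P2P users did not pay') hides the fact
# that non-sharers were even less likely to pay for downloads.
```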

Another point is just a question I'd like to raise: how to deal with foreign evidence in those debates, surveys from foreign countries; how do you accept them or not? Trends in society can be quite different in different countries. But what I see too little of is surveys that are set up across countries in exactly the same way, which could really point the way to how differences in policy affect people's behaviour and affect what is going on in businesses. Because what's done now is mostly that surveys set up in different ways, with different questions, different framing and different timing, are more or less thrown together in one large stew. And I think evidence could profit a lot from international surveys done in the same way.

Then maybe to conclude, there were some statements about anecdotal evidence or case studies. I am very sceptical about anecdotal evidence because it's quite often off the mark. And that's because the debate on piracy and copyright issues is a very polarised and very fierce debate, and people will hang on to anecdotes that are in favour of their way of seeing things. For instance, talking about parody, there is this famous bunker scene from Der Untergang [Downfall], and there were many parodies of that scene with different subtitles or different text, when Hitler was screaming at his commanders (e.g. https://www.youtube.com/watch?v=et76wPvRtgU). It's a famous story that these parodies were taken off YouTube because the company making the film wouldn't allow it. And the producer or the director of the film, Hirschbiegel, actually said, 'Well, I kind of like them; I laughed about them'. I have heard that story dozens of times, and it's a story we cling onto, saying, 'Okay, the authors really don't mind so much; it's the big companies making money that are the obnoxious types that get this stuff off YouTube'. But then we did a survey amongst 5,000 creators and asked them questions, amongst other things, about how they related to remixing; and it turned out that the majority of them were really opposed to remixing, and they actually felt it was a threat to their earning opportunities (https://www.seo.nl/uploads/media/2011-17_Wat_er_speelt.pdf (in Dutch)). So, the story we like to believe - that the authors are cool and the companies are the obnoxious types - wasn't replicated in this research. So much for anecdotal evidence.

(NS) Nicola Searle. So, I'm going to be slightly self-indulgent and take a personal view of this. I'm the only economist in my family - it's a family of medical doctors and scientists - so when I think of evidence and I think of social scientists, I think a lot about what's been happening in the medical sciences and the use of evidence-based practice there. And I also want to comment that one of the things I find interesting about the discussions we've had today is that it's really difficult to have discussions about evidence without actually having discussions about evidence-based policy. So, we've actually discussed the policy in many ways more than the evidence.

I'll start off with my dad: I asked him, 'What do you think about evidence-based policy?' First of all he wanted to tell me all about economists, and his comment was that economists and doctors have many things in common, including the ability to walk on water. So, I think we should remember that we shouldn't develop god complexes in economics either. But the use of evidence in medicine has a long history; looking at how treatments actually affect patient outcomes, and how that should affect treatments and dosages, is quite an important concern. It's also an important concern in terms of knowledge exchange and taking actual evidence from research and putting that into practice in the National Health Service (NHS). So, the NHS has been looking at a lot of these questions for a long time.

If I want to start my thoughts on evidence I can start with anecdotes, as I just mentioned. So, medical theory in some cases tells us, for example, that beta-blockers would ease heart strain and result in fewer heart attacks; but in reality they actually increase mortality. On the flipside, hormone replacement therapy was meant to have a huge impact on breast cancer and things like that; but it ended up having unexpected consequences that were not identified in the first evidence-based analysis, because the evidence itself and the designs were flawed. And when we talk about medicine we're talking about lives. And we're lucky that in social sciences we're typically not talking about lives; but it does mean that we still have a lot of cases in medicine where things like antibiotic courses and cough syrups and nail fungal treatments - I bet you didn't expect anyone to say nail fungal treatment today - are used, but actually there's no evidence that they work. So, the question is: what type of snake oils are we using in copyright and in the creative industries? What are these kinds of things that theory tells us work, everyone believes they work, but in reality they may not?

But I should say that the parallels between science and socioeconomic policy are limited, because when we talk about evidence in science we're talking about very specific measurements. We're talking about salt content in blood, for example. Whereas in economics and socioeconomic analysis we're really dealing in proxies on a regular basis: everything is essentially a proxy for something else. So, valuations, utility - it's never really a direct measurement of the thing in question. So, that step away from the core question means that we're always talking about a slightly abstracted view of what the evidence is potentially measuring.

So, what is good evidence? And I'll show my bias as an economist and say it's quantitative. But at the same time, as we've mentioned multiple times today, evidence in IP is really difficult to collect. And I would argue that some evidence is better than no evidence. I've done some work on business models and trade secrets; both are notoriously difficult to measure: they're either secret, or how on earth do you measure business models. So, in cases like that and in a lot of cases we need qualitative evidence to either complement or perhaps illustrate areas where we are simply not able to do so. So, I think qualitative evidence has a very important role.

I thought this would be a bit cliché, and I expected someone else to say it today but they haven't - the good old pithy quote that data don't lie but people do. And that's a big problem. So, we have to remember that when we're looking at data, the data shouldn't theoretically have any bias; but it's in our interpretation that we can find a lot of bias.

Also I'd like to say that I have very high hopes for the future of evidence in copyright, particularly in the digital media, because what we can do now with data mining and the amount of data that's coming through - if you look at computer games for example: entire games are now designed based on the data that they get from players. And that is really encouraging, or slightly disturbing - depending on your privacy position. There was a case in the States where Target [a large retail chain in the US] predicted a pregnancy in someone before the father even knew. This is the kind of data that we're actually getting. So, I have very high hopes that we'll get to a point in copyright where we'll actually be able to have very interesting things to say from data.

But the question I also want to think about is what constitutes appropriate evidence, which is what we were asked. And the suggestion there is that there's also inappropriate evidence. And what is inappropriate evidence? I think when we look at biased evidence and the interpretation of that evidence - and again it's not the data that lie; it's the people, or the question, or the measurement, that lie - we should be very careful when we talk about data, and not rely solely on these kinds of interpretations.

So, I want to say that overall we've got imperfect evidence, but we are moving towards having better evidence. And I think some evidence is better than none. I'll go back to my dad again. He said, 'In medicine it takes five years to adopt a good idea, and 20 years to get rid of a bad one'. And I'd say that in copyright it seems to take us about a lifetime plus 70 years to get rid of a bad idea.

(DS) I'm Davide Secchi and I work at Bournemouth University. I study decision making in organisations. I want to be upfront, I know very little about copyright. I'm here to talk about experiments and experimental research and how it could possibly be applied to copyright research or law in general.

My starting point for discussing this is one of the papers that was shared by Martin, a paper by Ruth discussing evidence (Towse, 2011). She makes a distinction between what we know and what we don't know as a starting point to discuss evidence. I personally would add another layer, which is: what we think we know - but probably don't. And why is this important? Well, because of two elements that have already been cited. The first can be easily explained using the example of my discipline, where we study what we call latent variables (e.g., Kline, 2005; DeVellis, 2012), i.e. something that is unobservable but is there, such as your level of attention as I speak. How could I possibly get to that? There are ways to get to that measurement; but it's not something that I can see. Well, there are clues. That's the first thing. For example, in the study of copyright I would be much more interested in answering the question: what is the perception of copyright law according to a consumer or a policy maker? This is a typical organisational scientist type of question: the perception of something.

The second point, triggered by the question 'what we think we know', is replicability. One of the ways we all use as scientists to try and get a better understanding of reality - of evidence as a circuit of knowledge - is to be able to replicate whatever is presented as evidence. Now, experiments I think are one of the most powerful ways to replicate data, to gather evidence on a regular basis, if certain conditions are met. For example, if one wants to study whether a price change for a particular good affects buying behaviour, it is unlikely that it could be done in a supermarket - although there are many people who do that. I would say, if you do try to conduct experiments in a supermarket, you can never make sure that what really drives behaviour is the price change; it is probably the colour, the location in the aisle, other human beings buying the same thing - it's many things. So, if you want to conduct an experiment you've got to make sure that there is significant control, and the only thing that varies is the one that you want to study. Otherwise you don't get information that is clean enough to make judgements or to create data or evidence.
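As a rough illustration of that control logic, the sketch below randomises a hypothetical price cut so that an unobserved confounder such as taste is balanced across groups on average; every name and number here is invented.

```python
# Minimal sketch of a randomised experiment isolating one manipulated variable.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

taste = rng.normal(0.0, 1.0, n)               # unobserved confounder
treated = rng.random(n) < 0.5                 # random assignment to the 'price cut'

# purchase probability depends on taste and, by construction, +0.10 on the price cut
buy_prob = np.clip(0.3 + 0.10 * treated + 0.05 * taste, 0.0, 1.0)
bought = rng.random(n) < buy_prob

effect = bought[treated].mean() - bought[~treated].mean()
print(f"estimated effect of the price cut: {effect:.3f}")   # close to 0.10 by design
# Random assignment balances taste (and colour, shelf position, ...) across groups,
# so the difference in means isolates the manipulated variable.
```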

What is the advantage of testing hypotheses using experiments? Well, besides replicability, as I said, you can control settings. I used to say that applied psychologists are 'control freaks': they want to control everything in an experiment. And that is usually impossible in a social setting - you'll never control for everything. I remember in my former university they had an experiment room, which was basically a white cube with just a table and a chair. That was a perfect environment to conduct whatever experiment you want. They even had a window through which you could observe people from outside without being seen. That's kind of scary; however, it's kind of the perfect layout for 'control freaks'. However, independently of how hard you try, and even if you have the 'perfect' room, you can't of course have complete control. Even under those conditions you'll never get perfectly clean data. Still, in experiments you can control for many different effects; much more than if you check for prices and behaviour in a supermarket, for example. So, you can clearly isolate effects; sometimes you can generalise whatever you find - there are many ways to control for that. So experiments allow for high replicability. Sometimes you may also find that there are quite immediate practical implications, such as what to do to trigger particular organisational behaviours. I would say you can probably get some implications for policy too.

An example of this can be taken from a very popular book by Thaler and Sunstein (2008) entitled 'Nudge'; they use behavioural science to show how it may affect policy making. They also show some experiments and how you can use those to create policy recommendations. I see no reason to think that copyright is an exception in that respect.

Here is one very short example of how experiments can be applied to copyright and law: Martin, Fabian Homberg, who's not here, Dinusha and I (Secchi, Homberg, Mendis, Kretschmer, 2013) conducted an experiment on orphan works to check for many different things. We were able to find, for example, that there is an effect of how much information is attached to an orphan work, a work where authorship is unknown. There's a difference in what people take or choose depending on whether they're exposed to music or photographs. So, information seems to be relevant in the case of pictures; but it's not relevant in the choice of music - people would partly trust their ears. Another point is that when we actually tested for buying intention on the artefacts participants were exposed to, we found that the information wasn't really relevant to how people decided to invest their money in a piece of music or another piece of art. So, information seemed not to be decisive there. But I would say experiments can be conducted in, for example, open source type environments, where you may want to test what really makes people contribute to an open source system.

A final thought I can put forward is what makes people violate copyright law, for example. I think you can design some experiments on that too.

A final sentence: I'm probably a big fan of experiments, but I understand that this is not the only way I would go about gathering evidence. I think there have been plenty of examples of this.

(PS) Thank you. The known knowns, the known unknowns and the unknown knowns and all the other combinations are on the agenda, as well as experimentation.

Really it's over to you now. You have to do some work. You've got a bit of space here to pose questions to the panel. The panel can pose questions to itself.

Questions & Answers

(Paul Heald) Paul Heald, University of Illinois and Bournemouth University. I was wondering if you might comment on whether there's any reason to make a distinction between micro and macro level investigations. I was thinking of Nicola; it seems to me there are two very distinct questions: how one orders the list of kidney transplant priority under National Health Service rules, and whether a particular drug is efficacious in treating a particular kidney disease. One is a micro question; one is a macro question. I was just wondering if that distinction in levels of enquiry changes the way we should think about what constitutes evidence, and its usefulness in policy making.

(PS) A medical question for you, Nicola. You're eminently well qualified!

(NS) It's interesting because I'm a micro-economist, so that is something I come across in my own work: how applicable is this to policy-wide initiatives? Can you say, based on these smaller case studies and smaller levels of analysis and evidence, that it should become more of a macroeconomic policy?

I think one of the big issues is cost. With some of the types of questions we want to ask, it would simply be prohibitively costly to do it at a macro level; although I can appreciate that for a lot of things a macro level would perhaps provide a better understanding of what a particular policy is, and what the incentives and impact of a particular policy are. Trade secrets, for example, are just so difficult to do anything on at a macro level; it's just impossible. So, here is a micro-economist saying the preference should perhaps be for macro - bigger numbers are always better in terms of stats. It's not something that I think is easily answered, because the costs are prohibitive, and in some cases we won't ever be able to answer those questions.

(PS) Before I go to Ruth, can I just ask any other members of the panel whether they have any thoughts on the question, macro versus micro or macro and micro? It depends on the question.

(TH) If you're interested in the interaction between the IP system and growth, then you're in the area of macro, trying to prove linkages between some very important systems and the economy. But if you're interested in looking at what the value of copyright is in a particular industry, then you're looking at a sectoral dimension. That's probably mainly where you do your IP work: within the sectors. It depends on the question.

(RT) I think you're confusing generalisation and the macro level. What Nicola was talking about was generalisation. The distinction to me between micro and macroeconomics is that microeconomics is about behaviour, and that has to be looked at in quite an individual way; not just an individual sector perhaps, but at how people actually behave. Now, you might need a lot of evidence to show that across sectors, but it's still a micro study. But macroeconomics is looking at broad aggregates of things. Traditionally, if you looked at, say, Keynesian macroeconomics (Keynes, 1936) - Keynes of course was the originator of modern macroeconomics - he believed strongly that there could be no macroeconomics without a microeconomic foundation. But that's all got lost since we've had Bank of England models and Treasury models of the economy and so on - and it won't take you very much imagination to see they have not been particularly successful in recent times. I think it is very important to understand the behavioural element.

(NS) But isn't policy inherently macro?

(RT) No. If you're looking at how people behave in relation to say file sharing, you're looking at the micro level of people's incentives to do that, or disincentives not to do it. You might want to value it or not value it for the whole economy; but the essential part of it is why people do it in the first place.

(PS) You might just say there, Ruth, might you not, that it's the collective behaviour or the aggregation of individual behaviour which actually creates the perception of the problem in the case of file sharing?

(Lee Edwards) University of Leeds again. I'm just very conscious of the fact that there is a nod to qualitative data and the importance of qualitative data; but the emphasis still seems to be inevitably drawn back to quantitative approaches to define what good evidence is. I think there are a number of consequences to that that I think are important to bear in mind. First of all when you quantify something you treat it as part of a system; so there is an implicit assumption when we quantify evidence about copyright that copyright is part of a broader system that functions in some way and is predictable. And of course one of the big issues around copyright arguably is that it's not so predictable anymore - and we talked about that in lots of ways. That is one important consequence of that.

The other thing that happens is that when you emphasise quantitative data, and again we've alluded to this, it makes it very difficult for users to be heard in the policy making process, because they cannot do so as a group; they don't do that. They're also quite difficult to identify as a group of people whom one can survey, because you must fragment the broader population. And so implicitly when we say we like qualitative work and it's very interesting and it's very useful, perhaps it adds colour; but the quantitative data is really important because it allows us to replicate and so on - then by definition we exclude the notion of copyright not as a system but as a lived experience, which one uses to decide whether or not to contravene copyright law. People are doing that as part of their daily lives; they're not doing it as part of a system. And we neglect that, I think, when we continue to emphasise the quantitative part.

(CH) I beg to differ. I think that qualitative and quantitative evidence should be complementary, quite clearly. I mean, with quantitative research - say we run a survey or decide what secondary data to analyse - we have to anticipate in advance; we will only find answers to the questions that we anticipate. And qualitative research helps, for example, to pick up on new issues and unanticipated developments. I think anybody who has thought about research methodology realises that qualitative research is extremely important, especially when addressing uncertain developments - just like radical innovation and so on.

So, I thoroughly agree with your notion that qualitative research is important. I don't think that favouring quantitative research or putting emphasis on it means that you disregard qualitative research. I think we are at a stage where we know some of the issues, some of the basic problems that exist. And now the question is: how do we develop one copyright policy, or a limited number of variations of copyright, to address a huge number of agents, firms and individuals operating under it? So, taking this step to a more generalisable result is, I think, inevitable; but it should ideally be based on an initial clear understanding of, and repeated attention to, more micro, qualitative case study evidence - I completely agree with that. I don't think the two are mutually exclusive.

(Lee Edwards) I just want to clarify the point I was making. I don't dispute that the people on the panel and elsewhere today who have spoken recognise the importance of qualitative research; but what I'm hearing is that there is an emphasis on quantitative research as the most valid evidence. And that's where I find the problem.

(PS) There seem to be quite obvious reasons for that, don't there? One of which would be that it's something you can point to, if you like, which assumes the guise of ineluctable fact. And perhaps one of the implications of what you're saying is that it makes the design of policies rather difficult if you don't capture, as it were, what lies beneath the surface: the things that people don't, in a sense, want to know about because they're too difficult to grasp. I think that's possibly where you're going with that question.

Other points?

(WP) I just think what the lady before was saying was absolutely spot on. I want to give an example of BBC iPlayer: you think about linear TV and radio, woefully served by measurement data, not least the fact that the panels don't include students and they don't include migrants. I think there are quite a lot of students and migrants in the economy and they all consume radio and TV. How good is that for measurement, for example? So, then you go to iPlayer and you think this is the land of milk and honey: we've got all this granular data and we can really understand the interaction. And this professor at London Business School, I'll never forget how he explained the study to me. He was asking at a micro focus group level, 'How much iPlayer do you consume as a household family of an evening?' Two to three hours. Great. He carried the survey out over time; came back and asked, 'How much iPlayer, TV, interactive TV do you consume in one evening?' Two to three hours. So, he got a bit suspicious. So, he filmed the families in their households and then asked the question again, 'How much do you consume?' Two to three hours. And the filming showed at best a half-hour show once a week. So, then he turned this around again and said, 'All right, watch yourselves on TV, what you did last night. I'm going to ask you the question again: how much iPlayer did you consume?' And they said two to three hours. So, it just helps to illustrate how much people exaggerate the amount of digital media they actually consume when they're reporting back in surveys.

And no macro study could have revealed that; you've got to get it right down to micro touchy-feely focus group level to uncover those flaws. So, it's a huge issue.

On the macro side as well, just remember that when you aggregate it all up the bigger the numbers, the better it looks; and within the DCMS categories of creative industries as an umbrella you've got competition between sectors. So, you have advertising and you have music; and the GVA (Gross Value Added) contribution of advertising will grow if the cost of music rights and licensing falls. So, there's displacement within those sectoral categories which often gets lost too. It's not that we're all in this together; we're all competing with each other as well. So, I think that's an important one too.

And the last one, just to build on Nicola's brilliant examples of why macroeconomics is bankrupt, and John Keene told me this last week: there's two economists in a field, and the field is surrounded by livestock, and a bull in the field turns around and starts chasing after these two economists. One economist opens up his laptop, and the other economist says, 'What the heck are you doing?' He says, 'Ah don't worry about it. I'm modelling the decisions that the bull has'. And he says, 'Well, the bull is coming to us really fast'. He says, 'Don't worry; the bull has got to model its decisions too'. It kind of captures the problem of macroeconomics.

(RJ) Similar question. You talk about doing experiments; how can you do experiments with copyright law? You can't. One of the big problems with law making is that you can't do experiments.

(TH) I tell you that I can.

(RJ) Give me an example, the best example you can think of where you did a useful experiment in copyright.

(PS) Do you want to start this?

(Paul Heald) Sure. I just finished a study on audio books, refuting economists' claims that public domain audio books would be of low quality... the claim by the economists is that audio books made from public domain books would be of lower quality than those made from copyrighted books, and also that the low quality recordings would diminish the value of the underlying work. So, you do an experiment: you have people listen to five-minute excerpts from public domain books and copyrighted books; you have controls; and you actually come up with numbers showing that in fact the quality of audio books made from public domain works is actually slightly higher than those made from copyrighted books.

It's research that is relevant to particular policy questions, underlying copyright law. Certainly it can be done, it's helpful.
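The comparison Paul Heald describes can be sketched roughly as follows; the listener ratings below are invented and stand in for the real data only to show the shape of the analysis.

```python
# Sketch of a two-group comparison of excerpt quality ratings (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
public_domain = rng.normal(7.2, 1.5, 120)     # hypothetical 1-10 quality ratings
in_copyright  = rng.normal(7.0, 1.5, 120)

t_stat, p_value = stats.ttest_ind(public_domain, in_copyright)
print(f"mean difference: {public_domain.mean() - in_copyright.mean():+.2f}")
print(f"p-value: {p_value:.3f}")
# With real data one would also check rater effects and whether any difference
# is large enough to matter, not just whether p < 0.05.
```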

(RJ) You can do experiments in measuring, I see that; but you can't experiment with the law. You can't change the law and say, 'Let's see what happens with that'.

(Paul Heald) Yes, but the law changes on...

(RJ) I'll just change it in Yorkshire.

(RT) But what about hanging? I mean, we're not hanging people anymore; that's...

(RJ) Well, if that was a viable experiment...

(Paul Heald) That's why it's a natural experiment; they're not paying you.

(TH) But Robin, there are different experiments: natural experiments, where you go out and try to observe whether something happened in one country where the law was x, against a country where it was y. And then you have the lab - which I think is where the other idea is operating - where you create a situation and you ask students, typically, to come in, give them ten quid and say, 'Now we want you to play this game. Here are the rules.' And then you observe what they do. And you start to change certain control variables. In the game that we're doing we're looking at giving lottery prizes to people who donate to charity. You take part in a lottery; does that improve the incentive or the level of contribution?

(RJ) I see that.

(TH) So, it's that: you're playing with mechanisms to see whether it has an effect or not.

(JP) The EU is actually quite a nice natural experiment: there are 27 member states and one copyright directive that can be implemented in many different ways. This does give room for my plea for international surveys; there are natural differences in the way this directive has been implemented, which could be used for actually gaining knowledge from this natural experiment.

(WP) But Robin's point relates also to something two panellists mentioned, which is Goodhart's Law (https://en.wikipedia.org/wiki/Goodhart's_law) of macroeconomics; which is that when you've got so many moving variables and you target just one, that target becomes redundant. So, the Bank of England just does inflation targeting, which is all it's trying to do. Exchange rates, money supply, all of the factors that the Bank of England could have controlled start getting way out of line. And illustrating Ruth Towse's point, there were 850 economists at the Bank of England in 2007 and not one was looking at asset price problems; they were all looking at inflation targeting. And that's why the controlled experiments have a problem, which is just: what are you controlling?

(MK) Sometimes you've got natural experiments which come close to controlled experiments. Term extension is a good one; we could look at what happens from now till next November, while sound recordings from 1962 still fall into the public domain - from next November they no longer will. So, that's one particular change which...

(RJ) That's not an experiment.

(MK) No, it is. You probably can assume that the change you will then observe in the market for that particular kind of recording can be attributed to the change in the law. What controlled experiments try to do is just isolate out as many of those factors as possible. You can get very valuable insights on the likely behavioural effects of policy interventions.
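One hedged way to formalise that logic is a difference-in-differences comparison: the recordings affected by the term change are compared with an unaffected control group, before and after, so that market-wide trends are netted out. The figures below are purely illustrative.

```python
# Illustrative difference-in-differences sketch for a natural experiment (invented numbers).
affected_before, affected_after = 4.0, 2.5    # e.g. average reissues per affected title
control_before,  control_after  = 3.8, 3.6    # e.g. similar but unaffected titles

did = (affected_after - affected_before) - (control_after - control_before)
print(f"difference-in-differences estimate: {did:+.2f} reissues per title")
# The control group absorbs market-wide trends, so the remaining gap is more
# plausibly attributable to the change in the law.
```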

(DS) Two points. One is that - well, I don't want to talk more about experiments, but - the closest thing to a piece of legislation or a rule in an organisational setting is a code of practice or a contract that employees sign with the company. There are tons of experiments in my field that study how people react to a change of a policy or a change of regulation, or how they perceive it: are they more motivated to do the job as well as they can or not, are they satisfied or not, etc. So...

(RJ) But those are the ones that you can't do, because if you change the policy that means really changing the law. And that happens to everybody.

(DS) But you can do thought experiments or you can simulate some sort of fake situation where there are rules that are not exactly the same rules that are out there, but they mimic whatever is out there so that you can control how people react to a sudden change in that or not. So, there are ways to get close to that.

Now, as I said, I'm not a copyright expert; but there are things that are usually tested in my field that may be close to legislation.

The second point I was thinking of is your example of the family and the iPlayer. I would say there are probably a couple of issues that I can think of. The first is that you're not really testing the time allocated to that; you're testing their perception of it. And that was my point at the beginning. If you do that, there's no such thing as one item that will bring you the information; you need a bunch of items - four, five, six, I don't know - that will tell you what is a reliable measure, or a reliable way for these people to understand what it is exactly that they are doing. So, I would say maybe there are two issues: one is what exactly it is that you're measuring; and secondly, how you're measuring it.
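The multi-item measurement DS describes is usually checked for internal consistency, for instance with Cronbach's alpha; the sketch below uses invented responses simply to show the idea of several items tapping one latent perception.

```python
# Sketch: several items measuring one latent construct, checked with Cronbach's alpha.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, 300)                                  # unobserved construct
items = np.column_stack([latent + rng.normal(0.0, 0.8, 300) for _ in range(5)])

print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")             # roughly 0.8 here
```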

(CH) May I just jump in with one sentence on that point? The iPlayer example put forward by Will Page earlier illustrates that it's better to observe behaviour directly than to rely on reports of behaviour, recollections of behaviour. That has little to do with the relative merits of qualitative or quantitative research. So, the example didn't quite work. I would agree with the previous speaker that the quality of empirical work is decisive. We may also have to develop flexible solutions and triangulate different types of evidence.

(AP) The point I wanted to make briefly was in relation to a marvellous comment on the extension of the term of protection. I think it was a very good example of bad data and information in relation to research, on the grounds that it completely forgot to look at audio-visual performance. And for the first time since the Treaty of Rome, when asked to endorse a term extension for performers, they said, 'Well, actually we forgot about them because we only looked at the impact assessment of audio productions and not audio-visual productions'. So, we now have a situation where audio productions will be protected for 70 years and audio-visual productions will, from a performance perspective, only be protected for 50 years. So, it was an interesting exercise in relation to both quantitative and qualitative research; which had simply assumed - the point I made earlier - that one question and one size fits all, when it doesn't necessarily.

I just want to pick up very quickly as well the point about macro and micro study, in relation to some figures I gave earlier. I made the point that we distributed £12 million to performers, and the average was £500 and the top payment was £22,000 - and those are macro figures. How do you study the tens, twenties, fifty letters that you receive at the end of that from the grandmothers who received £50 saying: you've just allowed me to spend money on Christmas? When you're looking at what is evidence, and how far down you can go to study that evidence, it is very difficult in the macro and in the quantitative to actually capture the social impact. And I think that was part of the question in relation to what is evidence, what is good evidence. Actually, economics and numbers are one thing; people are another. Unfortunately in this issue people are the issue, in relation to users, providers and rights holders - and they will not always fit into boxes when trying to look at the micro sides of it.

(RT) I agree with you. There are a couple of things one could say immediately. One is that we do have data on income distribution in the country; so if you find, as you said, from the survey of your members that they're earning so much then we can compare it to the national average and that tells you something. It doesn't tell you how the grandmother reacts, perhaps; but it does tell you something.

But there's another much more fundamental question lying around here, which is that social science, which deals with human beings, whether in the economic sphere or in other spheres, the social sphere and so on, has to generalise or else it can pack up and go home. Of course those generalisations are just that - generalisations; I mean they do not fit every person.

Speaking of anecdotes, one of my favourites is that you do some work and you provide some evidence and you tell somebody, 'I've found evidence of this' and they say, 'Oh yes, yes, well I don't believe your evidence because my grandmother said so and so'. I mean, if somebody can't tell the difference between finding generalised statements which may not hold for the individual case, and the validity of the anecdotal thing, there's something wrong. Social science isn't perfect - it's not like experimental work in medicine. Although by the way, some of your examples I'm afraid to say have been rather destroyed by a recent report on what people do to fiddle their data to get their results. But we have to generalise; and if we don't generalise we won't be able to do anything. And law is general as well, Robin. So, generalisation is essential to what we're doing.

(TH) I just want to make an observation about the comparison made with competition policy and IP. In competition policy we've had now for 20, 30 years a lot of economic evidence flowing into the process, into the procedures. And companies are forced to justify what they do or to defend themselves - and they need economic evidence to do that. I'm not sure we are at that stage within the world of intellectual property or in copyright. I think there is a role for government to go and argue and put pressure on companies: if you don't justify, if you don't show, we will take measure x. And I think that would generate more insight, more data and probably more economic research.

(MK) But for competition law the rules are already set. So, the evidence you are asked to produce relates to certain rules, which are given. It may then be about market definition; it may be about dominance. And therefore it's quite clear that it's a question of a quantitative nature relative to rules which are given. In copyright law we are not there at all, because the rule in relation to which we are supposed to produce evidence is not there. Term extension is a very good example; audio-visual performance is a very good example. If you agree with the policy aim of rewarding performers, collective licensing is only one means; there are hundreds of others. You could provide incentives through the tax system; there's something you could do through unemployment benefits for periods when you don't work; you could have grants for which you apply - there are lots of other means. In order to justify using the copyright system for that particular aim, we need evidence for that.

(WP) Just back to that iPlayer example. On qualitative versus quantitative, my point is, backing up what the lady from Leeds was saying, that the professor from the business school managed to smell a rat, which is that everybody was exaggerating how much of that content they were using. Droves and droves of qualitative and quantitative research, and there was an exaggeration in it. And the rat was: families don't sit around laptops; they sit around TV screens. And he was observing families; and he smelled the rat and exposed it. I just think you're better placed to smell a rat when you're dealing with humans than when you're dealing with numbers.

Another example: around this time last year OFCOM announced some startling new research [https://consumers.ofcom.org.uk/] - I'm sure it was excellent qualitative, excellent quantitative, ticked all the boxes - that Facebook usage time was on the way down, and therefore the social networking phenomenon that is Facebook was officially over. I smelled a rat. I raised my hand and said, 'Wait a second; the Facebook app is now being used on the smartphone, which means you can do more on Facebook with less time'. I kind of found out what all my friends were doing in six minutes as opposed to 16, getting more utility through less time. And you declared Facebook was over on a stage in front of a public audience.

And I just think you can smell rats better when dealing with humans.

(PS) Well, that's a very profound thought. I don't know who your friends are!

So, I think we have a one-minute wrap-up for each member of this panel. So, starting with Nicola; if there are any thoughts you wish to offer to the audience here before we go for tea or whatever it is we're having.

(NS) I guess one of the things I've been hearing a lot about, which is slightly introducing a new topic, is knowledge exchange - which is a bit of a buzzword - but getting all of this evidence to practitioners and into policy is the bigger challenge too. I don't know where we're going with some of this. And it's the same thing that they've had in medicine for a long time. Interesting.

(PS) It's the new religion in academia.

(NS) It is.

(DS) You shouldn't ask economists the million dollar questions; ask them for partial questions - that's safer.

(PS) Would you get more for each part of the answer than you would for one answer?

(DS) No, you would probably get a more reliable answer. I would say, together with quantitative and qualitative, I would add something in the middle, which is simulation data. We can discuss if that's evidence, although I don't like the word evidence; I talk about data usually. So, we can discuss if that data is useful or not. But right now I think there are very powerful simulation tools that have been used a little bit in bandwagon and innovation diffusion processes (e.g., Abrahamson and Rosenkopf, 1997), like agent based modelling (Gilbert, 2008). I think that's yet one more way to kind of test one's theory and then get out and test it for real with quantitative and qualitative methods.
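A minimal sketch of the kind of agent-based bandwagon model DS alludes to (in the spirit of Abrahamson and Rosenkopf, 1997; Gilbert, 2008): each agent adopts an innovation once the share of adopters exceeds its own threshold. All parameters are illustrative only.

```python
# Minimal agent-based bandwagon/diffusion sketch (illustrative parameters).
import numpy as np

rng = np.random.default_rng(3)
n_agents = 500
thresholds = rng.uniform(0.0, 0.6, n_agents)    # heterogeneous adoption thresholds
adopted = thresholds < 0.02                     # a few initial adopters

for step in range(50):
    share = adopted.mean()
    newly = (~adopted) & (thresholds <= share)  # adopt once enough others have
    if not newly.any():
        break
    adopted |= newly
    print(f"step {step:2d}: {adopted.mean():.0%} of agents have adopted")
```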

(PS) War games.

(CH) If I weren't afraid to exceed my role here, I would now hold a referendum and ask how many people in the room agree that copyright policy is about innovation rather than shifting money from users to rights holders. I'd love to see the outcome and hear your opinion on that. I guess we don't have the scope for that, however.

(PS) I'll phone the judge!

(DS) It depends on the voting system now! So, we don't have time for that; I appreciate it.

(TH) I think in copyright we are data poor, and unless we change that we can't make progress as economists.

(PS) Well, there we are. Thank you very much to the panel for a very wide array of thoughts; sometimes conflicting, occasionally converging, always stimulating. So, give them a hand. Thank you very much.