Policy-makers' view on evidence for evidence-based policy

Speakers

Nick Munn (Deputy Director Copyright, IPO) – hereinafter (NM)
Pippa Hall (Economic Adviser, IPO) – hereinafter (Pippa Hall)
Linda Humphries (International Adviser, IT Reform, Cabinet Office) – hereinafter (LH)
Chair: Prof. Martin Kretschmer – hereinafter (MK)

Questions & Answers

Richard Paterson (Head of Research and Scholarship, British Film Institute) – hereinafter (RP)
Robin Jacob (Professor, UCL) – hereinafter (RJ)
Will Page (Director, Spotify) – hereinafter (WP)
Ruth Towse (Professor, Bournemouth University) – hereinafter (RT)

(MK) Let me introduce our panel of three: Nick Munn, Deputy Copyright Director from the IP Office, Pippa Hall and Linda Humphries. Pippa, Economic Adviser at the IPO, and Linda Humphries from the Cabinet Office who worked on the open standards policy, which was just announced last week. The order of play is, we go from Nick, to Pippa, then to Linda; and I would ask the speakers to introduce themselves in their role, what they do, because for us today it matters greatly - in the evidence context - how you understand yourself, how you understand your function, and from what perspective you produce your material.

(NM) Thank you very much Martin, and thank you to Bournemouth for the opportunity to come here and contribute to what I think is going to be a very stimulating and, perhaps, challenging discussion, but also a very important one. And I thought - as what, I suppose, some of you might call a policy-maker, that is to say a civil servant who is a member of the policy profession and has been for about fifteen years - I might say a little bit about the policy part of evidence for copyright policy. I think that might help us set the context for some of the discussion that is going to follow.

'Policy work is about delivering change in the real world. Successful policy... depends on the development and use of a sound evidence base.'

So, what do I mean when I say 'policy'? I'm going to fall back on what the policy profession itself thinks, so this is from a civil service website: 'Policy work is about delivering change in the real world. Successful policy,' it goes on to say elsewhere, 'depends on the development and use of a sound evidence base.' And on a couple of other things, in fact: on understanding and managing the political context - and I'll come back to that, because that's actually quite important - and on planning how the policy is delivered. It's important to stress that Whitehall doesn't have the monopoly on policy-making expertise, and Linda, in particular, will talk a bit about ways of policy-making which are more open than perhaps the more traditional ones. But what does the use of evidence mean for the policy-maker? Well, again, I'm drawing on the professional skills framework for the policy profession: 'The policy-maker understands the history of a policy area, seeks out rigorous evidence into new approaches, analyses the quality of available evidence and takes steps to mitigate gaps or weaknesses, uses a range of sources, including the front-line and customer insight, uses evidence to test and challenge assumptions and recommends a preferred option, based on thorough analysis of the evidence.' That's actually abstracted from a larger skills framework, not all of which is based on evidence and not all of which is solely about evidence, but it just gives a flavour. All of which is well and good, but what is evidence? There are some competing ideas about what constitutes evidence. There's what you might call a modernist or scientific view of evidence: that evidence is, if you like, a set of data against which a set of hypotheses might be tested. There are different ways of expressing that, but that's the core understanding of it. There are some legal ideas about admissibility of evidence - what counts as evidence and what doesn't. And there's also what you might call a post-modern idea of evidence, which is that evidence is competing views of the world, whose claims - both content and interpretation - have to be assessed against one another. So how do we deal with that? Coming back to the aim of policy work, which is about delivering change in the real world - which, of course, includes deciding what change should be delivered and what shouldn't, what the priorities for change might be, and so on - the aim is real solutions: so, policy-makers and civil servants working in the policy process have to test possible solutions and problems against reality. And for that, the scientific method naturally lends itself. We are looking for falsifiable hypotheses, so Ruth's question earlier, 'Does copyright act as an incentive?', actually is an attempt to create a falsifiable hypothesis. Whether there is sufficient evidence to answer that definitively, I think, is still rather an open question, and I don't personally think there is a yes/no answer to that question - and I might come back to that. So, policy-makers are typically much more interested in the data against which hypotheses might be tested than they are in the conclusions that people have drawn from that data. In particular, where people's conclusions obscure the information on which those conclusions are based, that makes them very difficult to assess in the context of other views of the world. And here we are, back in the post-modern approach again: the strength of the evidence, the validity of it, how it compares with other things.
And, again, we are back to a need for predictive power. One of the key questions for policy is, 'Will it work?' and, even if you know that something has worked in another context, there are questions around 'Will it work here?' So, it works there, but why does it work? So there are questions of both the existence of an effect, if you like, and whether that is something that can be created, appropriated, made to happen in a particular context. And there are, inevitably, going to be a range of views about that. There is a corollary to that, which is that, if you are coming up with evidence which is not addressing some of these questions, it may actually be less relevant to policy-making than you think it should be.

One of the key things for civil servants in this country, in particular, is that, because one is working for the government, the government's interpretative framework is the framework within which one is analysing... if the government views a certain kind of thing in a certain kind of way, then it is necessary for civil servants to relate to it.

I want to say a little bit about the avoidance of bias, which is one of the goals of academic evidence creation. It is also one of the goals, of course, of policy-making in the public sector, but one of the challenges to that is, again thinking a bit more post-modernly, that there is simply no neutral perspective or dispassionate interpretation. Textual scholars, for instance, would just say that's not available. So, looking a little outside copyright, in the hope to avoid controversy for several seconds, I look to the theologian, a guy called Walter Brueggemann (1993), who is a textual scholar. And he writes, 'Every text makes its claim, each such claim, however, requires attention that it be recognised, and understood, and weighted alongside other texts with other claims.' And, actually, although from a completely different field, that is not at all dissimilar to the kind of thinking that goes into the policy process, the weighing alongside other claims. Now, there is a question here of interpretative framework. I mentioned before political context. One of the key things for civil servants in this country, in particular, is that, because one is working for the government, the government's interpretative framework is the framework within which one is analysing - not uncritically - but there is something about if the government views a certain kind of thing in a certain kind of way, then it is necessary for civil servants to relate to it. So, to take a possibly relevant, but, hopefully, abstract example: if you have a government concerned with freedom, it will think about some questions, possibly including parody, differently from a government that is concerned more about the protection of people from bad things. Now, those are both perfectly valid things that you might want government to do, but those differences in emphasis might lead to differences in interpretation. Government does not, of itself, have a neutral framework and the framework that government has is largely a politically inherited one. Now, in copyright there are competing versions of interpretative framework ... There are, crudely, two paradigms that are fighting it out which could be characterised as 'more copyright' and 'less copyright'. There is a distinct view that says that copyright is a good thing, because of its incentive effect, therefore more copyright is better, and a contrary view which says, 'Actually, there are a lot of examples of copyright having a negative effect' and, therefore, less copyright is better. And the government's current paradigm is that neither of those is a sufficiently good explanation of what we see in the world of copyright. So, the government's current paradigm is, what one might call, a balanced paradigm, or an evaluating paradigm which says, 'Actually there are arguments being made that deserve investigation - both from the more copyright and less copyright camps'. And, just as copyright is not, of itself, regarded by economists as a first best solution, but as a second best solution, this seems to fit quite a lot of the general thinking about how copyright may or may not work.

So, what is evidence for copyright policy? Well, ideally, we'd like things with predictive power - so, data about copyright that can be used to form and test hypotheses about future behaviour, that appear likely to be predictive of actual behaviour and outcomes. And trying to get that is why the IPO published its guidelines about good evidence (IPO, 2011) - about which I think Pippa might say a little more in a bit. Failing that, evidence is, and will always be, also about the arguments made by particular interests for particular ends; and that's one of the reasons why we are very keen to know who is saying what, as well as what is being said. Again, there is no neutral interpretation. We are back to Brueggemann and post-modern textual theory. So, ultimately, evidence is that which can be relied upon in decisions about change in the real world. I just want to give one brief example of evidence for policy-making, though - which isn't a government one - which I think is instructive about ways to deal with situations which are new and where evidence is hard to come by. It's an Australian example, from the Australian National University, which was founded in about 1950. When they built it, they didn't build any paths between the buildings. They found out where the students walked and then put the paths there. There is something about that natural experiment - finding out what people do, and then finding ways to make that normative and safe - that is actually a very powerful thing. So the evidence from experience, which, as Ruth was hinting earlier, is not always best obtained by econometric studies, is just as important in policy-making as the ability to create numbers. Ideally, we'd like both. Which I think is probably my cue to hand over to Pippa, who is good at numbers.

(Pippa Hall) I'm Pippa. I'm the economic adviser at the IPO who's been leading the Hargreaves implementation programme, so my role, as Martin has asked me to describe, is probably to bridge the gap between the academic evidence and the stakeholder evidence, and then to formulate it into policy and discuss with the policy-makers how it all fits together.

As Martin's already set out, we published the Good Evidence Guide (IPO, 2011), back in autumn last year, I think, setting out the key guidance on what we, in the economics and research and evidence team at the IPO, consider to be robust evidence that can be used in the development of policy. There's been a lot of work on what defines good evidence, and I'm sure, as today progresses, we will hear quite differing views as to what that is. So, we drew on a lot of this past work, and also our experience - seeing what has been submitted previously as evidence to consultations - and we came up with these standards of evidence that can help inform the development of policy.

I'm sure everyone here's skimmed the guide, so I won't use my allotted time to go into it in much detail, but what I will say is that our objective is that all evidence used to inform public policy meets three criteria. The first is that it is clear: it's in clear English so that everyone can understand it, all the assumptions are explicitly stated, it's clear who commissioned it and who paid for it, it's clear what calculations were used, and all the raw data is provided so that everyone can see exactly how you got from A, to B, to C. Secondly, it is verifiable, so the data is included. Now, that doesn't necessarily mean that the data needs to be made public, because I understand that in some circumstances the data may be commercially sensitive, but it can be used in a controlled environment such as the Office for National Statistics (https://www.ons.gov.uk/ons/index.html), as it was when we were doing the copyright investment figures (Goodridge, Haskel and Mitra-Kahn, 2012). And, finally, it's peer-reviewable. Now, again, this doesn't mean that it has been peer-reviewed. I appreciate that often the consultation deadlines are relatively short, so it's not always possible to do that, but it means that it can be peer-reviewed in the future.

So, why are these three criteria important for us? When making policy decisions based on evidence, it's vital that we are able to make these decisions based on the most robust evidence available, and to be able to reflect accurately the limitations of the data and the evidence. This allows us to make informed policy that's going to have a real impact on society, so we really need to make sure that we have the best evidence out there and that the evidence can be challenged, built upon and acknowledged.

All evidence is important for government when we are designing policy and it's for us, as the analysts in government, to weigh up the robustness and the reliability of the evidence that's submitted to us.

I think one of the important points to make is that evidence doesn't always mean economics. I'm an economist, so I would prefer that it did, but evidence can also be social, scientific, legal, anecdotal and case studies. Often the case studies give us the most insight - why do people do things, what's the logic behind it? In an ideal world we would have both the case studies and the numbers, but that's not always possible. All evidence is important for government when we are designing policy and it's for us, as the analysts in government, to weigh up the robustness and the reliability of the evidence that's submitted to us.

One of the objectives of the Good Evidence Guide was to give stakeholders an idea of the criteria that we use to weigh up this evidence and the criteria that need to be achieved if evidence is to have a real impact. But it's important to highlight that the IP rights cannot each be considered in isolation. We need to understand how the IP rights relate to each other, both nationally and internationally, and I think all of this needs to be balanced with ensuring that the research that we do commission and carry out - whether by stakeholders, government or academics - actually asks the right question: what is the real issue we are looking at? The research also needs to be flexible enough to keep up with the fast-moving policy landscape. It's therefore really imperative that we all get together - stakeholders, academics and policy-makers - to ensure that, early enough in the policy lifecycle, we are really identifying what the real issue is and making sure that the research we do answers that question. There's little point in producing technically brilliant econometric research if it doesn't answer the question, because then you're going to have no real impact on the policy.

So, finally, for me, as a government economist working in an area where the landscape is rapidly changing and where there is limited, and often conflicting, data, I welcome any attempt at coming up with new and verifiable data, as long as the limitations are clearly stated and all the shortcomings are acknowledged. Any new evidence can be built upon, so we might as well start from a low base and build from there. We certainly don't want to be in a place where little or no evidence is submitted because people are afraid that it may be rubbished, and the team really do welcome the opportunity to get involved in the early stages of scoping research, to make sure that academics, stakeholders and policy-makers get the most out of that research. We all want to meet the same objective, which is to produce and deliver robust and independent research which has the ability to influence and impact on the development of national and international policy. So, we're all after the same thing.

(LH) Thanks. I'm Linda Humphries from the Cabinet Office and, as Martin mentioned, I've just run the Open Standards consultation (https://consultation.cabinetoffice.gov.uk/openstandards/) and delivered the final policy, which was launched last week. What I'm not is an economist, and I'm not a policy-maker, strangely enough. I have a role which is more related to technology and technology strategy, particularly in gathering and sharing case studies and knowledge with other governments as well as our own, and my background really is communications. So that just gives you a flavour of the range of different people who work in policy-making in government as well, I think, because what I do have to do is rely on experts in the Cabinet Office - who are policy-makers, and who are from an economics background or from a legal background - and draw all of that together in forming what is essentially a technology policy.

So, I just wanted to give you a little bit of an insight as to how I approached this particular problem, and I'll talk about it in a wider sense in terms of open policy making, which is something that is quite close to my heart, and is something which our Cabinet Office is also leading on.

The usual suspects, people who were used to talking to government... the large corporate organisations who were coming in to talk to us - and we weren't really hearing the other side of the story. And that's why we set out to run a public consultation, and to do it in a really, really, open way.

So, I started off with a problem: we had a stated commitment from government that we would have open standards in government IT. There were a lot of unknowns around that - about what it meant, about how it would be implemented, whether or how it could achieve the goals that we were hoping it would achieve - and we'd had quite a lot of experience of lobbying: people coming in to talk to us, arguing that we were doing the wrong thing, that what we were trying to do would essentially not meet the aims that we were setting out to achieve. We were very aware that it was the usual suspects - people who were used to talking to government, the people who were used to providing technology for the government, the large corporate organisations - who were coming in to talk to us, and we weren't really hearing the other side of the story. And that's why we set out to run a public consultation, and to do it in a really, really open way, so that we could have open discussions and we could try and get some kind of self-moderation in those discussions, in what is well known to be a very fraught and emotive area - in technology circles at least. Most people don't know anything about it, and don't need to, but I think one of the essential things we were trying to do was just to bring some sunlight into it, and to make sure that people understood the issues and it wasn't all based on hype.

So, in terms of open policy-making, if you're not aware of it, there's an open policy initiative with an associated website, www.openpolicy.demsoc.org, which the Cabinet Office and the Democratic Society are using to gather information and share knowledge about all things that relate to policy-making, including gathering evidence and what constitutes good evidence, but also the approach that we use in talking to people, how we run these things and how we analyse what comes back. Although these things were running in parallel - I was running my consultation at the same time as this debate was starting in public - we had a very similar ethos to start with, and that was that we need to make sure that we're reaching the right audience. We need to talk to people in ways that they understand, so the questions that we put in the consultation document weren't necessarily the questions we needed to ask in different environments. It was the same root of the question, but we needed to turn it around a little so that people understood what it was we were trying to draw out from them.

We ran it primarily online, and that, in itself, was quite an interesting experience. We wanted to make sure that the comments went up live, as people made them, so there was a little bit of moderation, but it was done very quickly and it only stopped the normal kind of moderation catch-alls - profanity, accusing a particular person of doing something, advertising, that kind of flavour. And, essentially, what we wanted to do was get people to make those comments, for those comments to appear straight away and for their names to be against them, so that we could try and get some debate generated online and a little bit of a flavour of whether people agreed with those statements or not. That tended to happen. In a public consultation we have to not limit how people talk to us, and we had other channels on which they could talk to us as well, and that included e-mail, letters, public round tables and, in some instances, submission of academic or professional articles.

The interesting thing that really came out from running it online is that the people who were on the open side of the debate went online, while those who were opposed to what we were trying to do, or had real issues with how we might be affecting their organisations or their businesses or their way of life, decided that e-mail was the best way. So, that debate didn't really happen in the way that we expected it to. Where it really did happen was when we had public round tables. They were a fascinating experience in themselves; we recorded all of them, and they are all online so that people can hear how the debate went and what the points were that were raised. Essentially, all this open policy-making activity is really supported by the new government principles on consultation that were published this summer, and also the civil service reform white paper, which is essentially encouraging policy-makers to be more open about the way in which we talk to people and the way that we listen. The term 'messy collaboration' is something that I've recently come across, which I think is a Clay Shirky (www.shirky.com) term - I don't know if it's his or something that he's co-opted - but essentially what we found, particularly through the round table sessions, was that people were really influencing the way that we were running the consultation. The more we came across issues, the more we targeted areas where we thought we were missing views, and we could then set up a new round table in a new part of the country, or talk to different networks to try and find out who the people were that we needed to be talking to, because we just weren't reaching far enough. So, we ended up talking to open data communities in Manchester and to open source communities in Bristol. We had SMEs in London, and we had one round table which was run essentially entirely over the telephone, because we needed to talk to voluntary and community organisations who just didn't have the resources to come and meet us. We just wanted to plug every gap that we could, and evolve the way that we ran the evidence-gathering as we went. I think from doing that, we raised the profile of the work we were doing. More people got to hear about it, and we ended up with a guerrilla evidence-gathering activity, whereby someone out in the open source community actually set up their own version of our consultation, promoted that around their peers, and that evidence came back to us as a 'job lot' - here you go; these are the answers we've collected! Because they used our questions to do that, we were able to take it on board; it was evidence that fed into the consultation, and it was actually a substantial part of our evidence base in the end. And it was generating that kind of interest in the community - getting people in the community excited about what we were doing, getting them to understand the issues - that helped to do that, and enabled them to discuss in their own communities what their thoughts were and then feed everything back to us.

To make sure that we had some rigour in how we were interpreting and analysing the evidence, we actually came to Martin. So, we commissioned two reports from Bournemouth [https://www.cabinetoffice.gov.uk/resource-library/open-standards-consultation-documents]. One was a review of the existing evidence that was out there that could inform our thinking on the economic and legal aspects of the policy that we were proposing, and it also looked at some of the material that was coming in as evidence through the consultation, to take that on board. But we also commissioned Bournemouth to do the analysis of the consultation responses, and there were a couple of important things that came up through that, I think. One was that we thought it was essential to publish the methodology as well, so that people understood how we treated the responses and how we allocated typologies to respondents, that kind of thing, so that they could disagree with us if they wanted to; they could see how we'd worked everything out, and that forms part of the analysis in our thinking when we're making the policy. The other thing that was really interesting, which was actually brought up by the researcher who did the analysis, is that because we hadn't started off with a closed set of multiple choice questions, we hadn't modelled our world view - I can't remember the technical term for it.

(MK) Grounded theory

(LH) She used grounded theory to flip that on its head, find out the flavour of the themes that were coming through in the responses and then allocated the responses to each one of those self-selected multiple choice answers, if you like. So, it meant that we didn't force people to choose something; they actually came to us with as many creative ideas as they possibly could, and it was kind of up to us to work out what the answer was from that. And I think it was quite a refreshing way to deal with this particular issue, because there could be one or two gems out there that would be missed, if we just gave a static view of the world. I'm not sure it would work in every instance, but for us I think it was really useful.

That was an excellent mix of perspectives and quite typical of the different approaches to evidence. One, the economist, takes a look at the nature of evidence, what are the features which make good evidence; the other perspective looks at the process, good evidence arrives through an appropriate process.

I just wanted to touch on a couple of the issues that came up whilst we were doing this. Like any informed evidence-gathering policy-maker, I was challenged during the consultation process to think about how we were going to treat responses that came to us that were just a sentence, a very emotional response to what we were doing: 'You've got to do this; it's the right thing to do.' How do you deal with that, compared to a 40-page, peer-reviewed academic paper? What we thought we needed was to weight the responses in some way but, taking advice from Bournemouth, we knew that was absolutely the wrong thing to do, because it is not a tool that's used in that way. So what we did instead was, we had the quantitative side, where we counted how many people were supporting what we were doing, but we also had the qualitative side, which delved into the points that were being brought up, and we considered each of those points, either as part of a summarised selection or, if they didn't fit into our summary, as outliers. So we looked at all the ideas that came up and responded to those, as well as having a flavour for what the mood was, if you like, in response to each of the questions that was posed.

Just one other point I want to bring up is the burden of consultation. I think we were asking some really challenging questions, and I think it's really, really important not to set the bar too high when we're trying to gather evidence, because a lot of the people we needed to talk to would be frightened off. We had one person come along to the stakeholder round tables who ran a pet shop. Never in my wildest dreams would I have thought she would be interested in this policy, but she had some really, really interesting and valid points to make. And that goes back to the case study idea: unless we understand what the barriers are for people trying to use this technology, we can't look at what the causes of those barriers are, and we need to understand how what is happening currently is affecting people, so that we can come up with ways of avoiding it.

(MK) Thank you. I think that was an excellent mix of perspectives and quite typical of the different approaches to evidence. One, the economist, takes a look at the nature of evidence, what are the features which make good evidence; the other perspective looks at the process, good evidence arrives through an appropriate process. So, they are very different starting points for producing evidence, and I think it's great to have them on the panel in this form. Nick Munn, like any good civil servant, managed to say very little ... But I think that's the skill of a civil servant, in a way.

So we have got time for a number of questions. Richard Paterson...

Questions & Answers

(RP) I'm from the British Film Institute.

(MK) In the same spirit as we have run the whole event, would you say what your function is, what do you do.

(RP) Oh, my title is Head of Research and Scholarship, and I am also quite heavily involved in IPO conversations at the minute. Obviously we submitted a lot of evidence - sorry, views - to Gowers (2006) and Hargreaves (2011).

I think you were very unfair to Nick, because I do think, if you are buried in the political process, the policy process, what he was saying is actually very valid, and basically I would ask Nick a very simple question: 'How are power relationships involved in the terms of reference for reviewing policy?' I'd also be interested to hear what Pippa thinks about this. The Gowers review was started by Gordon Brown, that is by the Treasury, not by the IPO. The Hargreaves review was started by David Cameron, the Prime Minister, not by the IPO. Now, why did it happen this way? It's an interesting question, I think, and relates back to what Nick was saying: there were a lot of underlying political factors involved. But who is expected to implement the Review conclusions? It's the IPO; and who, then, stands in the way? The lobby groups with a range of interests. And the example I would give is the Digital Economy Bill, where the orphan works and extended collective licensing clauses got dumped in the wash-up at the end of the last government. Various lobby interests came into play.

So, the question I'm going to ask is, 'How do you weigh the evidence?' I think Pippa touched on that. How do you weigh the evidence in the political process? Because I get the sense that evidence from the Motion Picture Association, the US studios, probably has greater weight than the evidence from a small SME. I would contradict that slightly by the example of Stop43 (www.stop43.org.uk), which is the photographers' lobby group that was set up to oppose the orphan works provision in the Digital Economy Bill - and which arguably was quite instrumental in stopping that clause being introduced into law. So, given the complexity of factors, how do you weigh the evidence?

(MK) Who wants to take this; any of you?

In terms of weighing up evidence I would really hope that as a government we don't just listen to the people who can pay for the best research and who have the loudest voice

(Pippa Hall) Shall I? I think, in terms of weighing up evidence, I would really hope that as a government we don't just listen to the people who can pay for the best research and who have the loudest voice, so I wouldn't automatically say that that is the case. The point about trying to weigh up the evidence is trying to look at what the real economics, or the real case study, is actually saying behind the emotive spin that may be put on it. So I think, when we weigh it up, it's looking at what the actual story is telling us and what the evidence is showing, rather than the emotive spin on it, and trying to weigh it in that sense. It is by looking at it and making sure the numbers stack up, that there isn't a spin on them and that it does apply across the board. I think, as a government economist, I do try hard to make sure that the evidence the SMEs give us isn't just disregarded because it hasn't got an academic name on the front, and really try and look at it, and see how we can put it into the evidence.

(NM) There are two things I'd like to say. One is that Linda has described a particular process of trying to make sure all voices with an interest are heard, and we understand that consultation is burdensome for anyone who takes the time and trouble to get involved, but the IPO and, indeed, the Gowers and Hargreaves reviews worked pretty hard to try and get a range of views, not just the most readily available, to try and understand the whole landscape. The other thing I'd point back to, in context, is that, when I talked about the factors shaping policy, one of the ones that we acknowledge as the policy profession in government is the political context, and some of what you've described would be things that operate, as I think you were saying, in the political realm, not necessarily in the realm of evidence. That's not to say there's no evidence for those views, but some of that activity would fall outside the strict purview of evidence. It's not treatable logically; rationally, you can understand those views are there, but they don't necessarily relate terribly well to what the government's actually proposed in some cases.

(RJ) I have a question, what is the difference between evidence ...

(MK) Robin, would you introduce yourself?

(RJ) Robin Jacob, Professor at UCL. What is the difference between evidence and lobbying? I'm about to tell you. I don't think there is any difference. And is there a difference between good lobbying and bad lobbying?

(Pippa Hall) I think, for me, the difference between lobbying and evidence is that lobbying is the spin put on the evidence. So, when I'm talking about evidence, I'm talking about actually looking at the data, or the example that you're going to use if we take the case study, and actually just putting the data and the evidence out there so that everyone can decide what the conclusion is, whereas lobbying is taking that evidence and putting your own spin on it to lobby the government. So, there have been a number of research projects recently where partners that you wouldn't necessarily expect to get together have done pieces of research, where they have just put the evidence out there and haven't tried to put the political spin on it, and I think that's the difference for me, as the economist.

Lobbying often features quite heavy inference from information, and it may also be aimed at the political sphere... lobbynomics.

(NM) I think there's some truth in that - that lobbying often features quite heavy inference from information, and it may also be aimed at the political sphere at least as much as at the evidence sphere. I am sure that this is one of those irregular nouns: I provide evidence, he or she lobbies, so-and-so is indulging in lobbynomics. No one ever likes to think of themselves as lobbying when they are presenting a point of view that they sincerely and passionately hold, and that's true of many of the people that we come across in talking about copyright.

(WP) Will Page, Spotify. Just to come back on Robin's ...

(MK) Can you say what your role at Spotify is, that's the rules of the game today.

(WP) OK, Director at Spotify

(MK) With a brief for what, sorry, it's important to know.

(WP) To bring economic insight to Spotify's business. So, just on Robin's point, there is an assumption in your question, which is that lobbying is simply bad and evidence is therefore good, and that comes out in the answers from the panel. But you could reverse that logic. Take DG Competition (the European Commission's Directorate General for Competition), or any competition authority, for example, that has to resolve a case, and that case happens to be narrowly defined. The competition authority says, 'I want to resolve that case', but, for all the best evidence in the world, having a narrow definition of the case you are trying to solve narrows the overall objective of that firm, or the complaint, or the issue at hand. Lobbying may be required to broaden the lens and realise that you can solve this narrow problem but just send the problem elsewhere. So I just want to give an example where it's not necessarily the case that evidence is good and lobbying is bad: evidence, in the best laid plans of mice and men, may not solve the overall problem, and lobbying could actually raise broader awareness - and that could be a multi-national firm, or that could be a charity, lobbying to say the case doesn't capture the problem.

(NM) I'd go back to the paradigmatic thing I was talking about: that part of evidence is about providing the paradigm of interpretation; that what you're putting forward is not neutral; you are actually putting forward context at the same time as you are putting forward what are likely to be the observables from the real world, if you like. I'm from the 'unsocial' sciences, so I like to think of observables, as a good physical scientist would. So, I don't think that the articulation of a perception about a wrongness of focus is necessarily lobbying, but lobbying may well be used in order to make that point. And that's a valid part of the political system; what it isn't is strictly within the purview of evidence, although evidence may very well be involved - indeed, one likes to think that it will be.

(RJ) I see that what Martin's exercise was, was a genuine attempt to collect evidence, measuring something. He will recognise that it has its limitations. It is limited to YouTube, to pop music - limited in all sorts of ways - but it was a genuine exercise in asking, does this matter or not, and seeing the results; and there was nobody who had any views on what the results were going to be. That, I understand, is the kind of evidence which a scientist would call evidence. Most of the other things are not, and one of the problems that I see is that, unless somebody has a point of view they want to get across, they're not going to respond to any of this stuff at all.

(RT) Let me just say very quickly, the first evidence, or the first data - I'll call it that - on the creative industries came from the United States, being measured by Stephen Siwek (Siwek, 2011), who's a very good economist, and financed by the Intellectual Property Alliance of America, and they continue to do the same thing. Of course, people like my Society for Economic Research on Copyright Issues (www.serci.org) have weighed in on questions such as where do you get these data and what are the assumptions behind them - which are often very hard to find out, because the national income accounts have been very poor in this area - but people regard this as very firm evidence. It's in every government policy document that I've seen: the creative industries are growing at 5% or 8%, and you have to do an awful lot of work to show that that is not the case, or that it is doubtful. Casting doubt is an important part, and I think, however well-resourced and excellent your work, it would be very hard to reveal all of that. And, I may say, I had a go at some of your own figures, which I didn't think were very easy to ...

(RJ) That £2.2 billion (Hargreaves, 2011) at the beginning of Hargreaves?

(RT) Yes

(MK) Well, I'm sure that the GDP debate will resurface again.