It is a great pleasure and privilege to give the opening speech to this conference on innovation economics. I am grateful to King’s College and Concurrences for organising such an important conference and for assembling such a dazzling array of expertise and talent. I know that we are in for a fascinating and very instructive day. And I am relieved to be having the first word, rather than trying at the end of the day to make sense of all that we will hear. I note that today is billed as innovation economics for antitrust lawyers. But I suspect that the debate will be at least as much about law as economics, and rightly so.
Let me start with the obvious. Innovation is crucial to our long-run living standards and prosperity. It will also be critical to the long-run wellbeing of the environment and the planet. Avoiding catastrophic climate change while maintaining living standards requires major innovation in ways not yet thought of.
My perspective on the nature of innovation is shaped by modern economic growth theory and the work of Brian Arthur. Modern growth theory, so-called endogenous growth theory, emphasises the importance of knowledge spillovers from sector to sector and firm to firm. Many have contributed to this expansive literature – I and colleagues once made a modest contribution analysing spillovers between the developed and developing worlds to understand better the development process – but the pioneering work of Romer was seminal. Brian Arthur’s classic, ‘The Nature of Technology’, emphasises what he calls recombinations – innovation very often involves not the invention and application of something new, but rather the combination in a novel way of elements that are already known and in use, often in different fields. Some used to argue that these spillovers and recombinations are easier within large dominant organisations. But all too often such organisations have a dominant culture that imposes a single view, discourages diverse thinking and is closed to ideas from the outside. And so radical recombinations are more likely in open markets with interactions between diverse organisations and individuals. Open markets mean that very many more minds are applied to the challenge of recombination, and those many more minds bring with them many more and diverse ways of thinking about issues and many more and diverse experiences. That is why open, effective competition is so important for promoting innovation. And the empirical evidence clearly shows this. It may be that atomistic, so-called perfect, competition (which rarely exists) is not the best structure for innovation, but nor is its opposite, monopoly or duopoly. Open, competitive markets really matter for our future.
That places competition policy firmly in the frame. That is especially so if Robert Gordon is right in arguing in his recent masterly book that fundamental innovations are in the past and that it is a mistake to see fundamental innovation in the current flurry of change. I don’t know whether he is right, but he is a very clever and learned man who has thought deeply about these issues over many decades, and it is true that productivity growth has slowed markedly in the US and European countries – one reason for the current UK government’s renewed emphasis on industrial policy. If Gordon is right, then that augurs badly for the future of our economies and our planet. And it is incumbent on us in the competition world to ensure that our policies and interventions promote, and certainly do not hinder, innovation, maximising our efforts to promote sustainable productivity growth. Hence the importance of the issues being discussed at this conference today.
Before diving into those issues, let me flag one set of issues that is important for this innovation agenda but which does not figure on our agenda today, or does so at best only implicitly. That is the important interaction between competition law and intellectual property (IP) law. IP law plays a key role in protecting the incentives to invent and invest. But we need to ensure that IP law is not used to exclude in an anti-competitive manner. The rise of Patent Assertion Entities (PAEs), sometimes known in more derogatory terms as patent trolls, may be just one example of this, prompting some leading industrial economists to question the efficacy of patents in promoting innovation (Boldrin and Levine, Journal of Economic Perspectives, winter 2013). The Federal Trade Commission has recently analysed these entities, distinguishing between Portfolio PAEs and Litigation PAEs, the latter group adopting practices seemingly deserving of the term troll. Importantly for our agenda, it found that 88% of patents held by PAEs were in the information and communications technology sectors, and more than 75% of these patents were software-related. This analysis has led the Federal Trade Commission to propose reforms in this area.
So turning now to competition policy, let me start with a challenge. Clearly for innovation what matters is dynamic competition – how the competitive landscape evolves through time. But unfortunately it is much harder to get good measures of dynamic competition. So we have all too readily fallen back on measures of static competition, which are easier to come by and therefore more comfortable to work with. Many cases rely on market structure and concentration, and that analysis is usually rather static in nature. And when the Competition and Markets Authority (CMA) estimates our impact, we tend to quantify detriment in terms of static rather than dynamic losses, because estimates of dynamic benefits are hard to come by and justify. But perhaps we need to work harder. We need to shift away from our comfort zone. And to accomplish this will require adaptations by all parts of the competition institutional framework. Competition authorities will need to work harder with economists to enhance the emphasis on dynamic analysis. And the appeal bodies will need to recognise that these effects are crucial, and may have to accept that the analysis is not as crisp, clear and heavily evidenced, but nonetheless has to be weighed appropriately in the light of its importance. We may all need to learn to be roughly right rather than exactly wrong.
We all naturally look online for the major fount of innovation. But that may well be wrong, for at least 2 reasons. First, online will not be the answer to some very major issues, such as global warming, though it may be instrumental in reaching a solution. Second, there are very major, non-online technologies that we need to develop to secure our future. I think of the potential development of battery technology that will help to make intermittent green energy supplies part of the mainstream rather than a wayward child that forces the system operator to resort to filthy fossil fuel supplies when the going gets rough, as has happened recently in the UK. I am sure there are many other offline technologies that will help shape our future, and we need to encourage them. That is an important strand in the UK government’s emerging industrial strategy, with its focus on infrastructure, and particularly the transportation, nuclear and life sciences sectors.
Having said that, online really matters and has become a major focus of competition work, certainly at the CMA. We have fined online sellers for price collusion on Amazon Marketplace, where, relevantly for today’s discussion, the collusion was implemented via an algorithm; we have clamped down on online resale price maintenance where we were concerned about consumer detriment; we have discouraged the use of wide most-favoured-nation or price parity provisions in areas such as motor insurance; we have issued a statement of objections to an online sales ban in golf club sales; and I could go on. And that is just the competition caseload: in the consumer protection part of our portfolio we have addressed many other behaviours causing severe consumer detriment, including around online gambling and the exploitation of minors through apps, and these interventions have led to changes in company practice not just in the UK but also Europe-wide. And we are undertaking a market study of digital comparison tools, including price comparison websites, because of their importance for consumers navigating the online world.
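To illustrate how collusion can be implemented via an algorithm, here is a minimal, hypothetical sketch of repricing software that gives effect to an agreement between two sellers not to undercut one another. The seller names, pricing logic and numbers are invented for illustration only; they are not drawn from any actual case file or repricing product.

```python
# Hypothetical sketch: how an agreement "not to undercut each other" could be
# automated by repricing software. Names, logic and numbers are illustrative
# only and are not drawn from any actual case file or vendor product.

COLLUDING_SELLERS = {"seller_a", "seller_b"}  # parties to the (unlawful) agreement


def reprice(my_seller_id: str, my_price: float, listings: dict[str, float]) -> float:
    """Return a new price for my listing given current competitor prices."""
    outside_rivals = [
        price for seller, price in listings.items()
        if seller != my_seller_id and seller not in COLLUDING_SELLERS
    ]
    if outside_rivals:
        # Compete normally against sellers outside the agreement.
        return min(outside_rivals) - 0.01
    # No outside competition: match the other cartel member instead of
    # undercutting, keeping prices aligned at the agreed level.
    cartel_prices = [
        price for seller, price in listings.items()
        if seller in COLLUDING_SELLERS and seller != my_seller_id
    ]
    return min(cartel_prices + [my_price])


if __name__ == "__main__":
    market = {"seller_a": 12.50, "seller_b": 12.50}
    print(reprice("seller_a", 12.50, market))  # 12.50 - prices stay aligned
```

The point of the sketch is simply that the software does nothing more sophisticated than execute, automatically and continuously, an agreement that would be unlawful if implemented by hand.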
In addressing online issues, competition authorities around the world are not reinventing competition policy: we are relying on the well-established principles of competition law that apply equally to the online world as to the offline world. The challenge is not to redefine competition law, but rather to apply well-established principles to new circumstances. In this, the careful, evidence-based analysis in which we aim to excel is to the fore.
Let me now come to 2 major issues for competition authorities in the tech/internet space. First, these are typically markets in which network effects and economies of scale combine to mean that early winners can become seriously dominant. Especially in online markets, an early lead based on competitive merit can tip imperceptibly into dominance and long-term exclusion. At what point does an entrant’s behaviour shift from being an aggressive entry strategy to being an abuse of market power? At what point should the competition authority move from tolerance to concern? These questions pose a dilemma for the competition authority: intervene too early and you suppress innovation; intervene too late and a dominant position is established that threatens open competition and innovation. There is a tipping point: intervening before it may be counter-productive, but intervening after it may be futile.
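To make that tipping dynamic concrete, here is a minimal simulation sketch in the spirit of Brian Arthur’s work on increasing returns, with entirely made-up parameters: each arriving user chooses between two platforms, and the probability of picking a platform rises with its installed base. A small early lead, arising purely by chance, frequently snowballs into near-total dominance.

```python
# Minimal sketch of tipping through network effects, with made-up parameters.
# Each arriving user chooses between two platforms; a platform's attractiveness
# grows with its installed base, so a small early lead tends to snowball into
# dominance (in the spirit of Arthur's increasing-returns models).
import random


def simulate(users: int = 10_000, strength: float = 2.0, seed: int = 1) -> float:
    """Return platform A's final share of the installed base."""
    random.seed(seed)
    base_a, base_b = 1, 1  # seed users on each platform
    for _ in range(users):
        # Choice probability weights the installed bases; `strength` governs
        # how strongly network effects dominate intrinsic preference.
        weight_a = base_a ** strength
        weight_b = base_b ** strength
        if random.random() < weight_a / (weight_a + weight_b):
            base_a += 1
        else:
            base_b += 1
    return base_a / (base_a + base_b)


if __name__ == "__main__":
    for s in range(5):
        share = simulate(seed=s)
        print(f"run {s}: platform A ends with {share:.0%} of users")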
To take a topical example: if a group of individual suppliers decided to sign up to a shared-pricing arrangement, whether on their own initiative or encouraged by a co-ordinator, and then posted inflated prices when demand was unusually high, we might well think of this as a competition problem and seek to strike it down. Of course, we would need to be sure that such an arrangement was not a justifiable response to countervailing market power, and above all that there was some harm to consumers from the behaviour we observed. But if over time that group became too dominant in the market, then that might well become a concern for the competition authority. What I have described is sometimes called the ‘Uber dilemma’, though that is but one example. Certain taxi apps have entered and driven down prices in what was hitherto a highly regulated market, arguably regulated in some respects to the detriment of consumers. Their entry can therefore be seen as yielding significant consumer benefit, both on price and on quality of service. And those who sign up early are clearly not engaging in any anti-competitive behaviour. But if Uber or indeed another supplier becomes dominant, is there a tipping point beyond which it is almost too late for the competition authority to intervene? In this example, the CMA has sought to intervene early to ensure that regulation does not stifle the innovation, and to promote platform competition by discouraging proposals that would have reinforced the network effect by preventing multi-homing by drivers. This approach may well be needed in other online markets as their platforms develop. And to be clear: we welcome taxi apps as a positive force for market opening, but we are alert to market developments that seek to shut down effective competition.
The second issue is this. Most tech companies, facing the competition authorities, argue that there is a trade-off between static and dynamic competition. There can be such a trade-off – that, after all, is the rationale for patents, though I have noted the empirical literature that questions the benefits of patents for innovation. But it can well be the case that reduced static competition leads to less dynamic competition – that, after all, is the tipping issue. Our chief economist, Mike Walker, together with Tony Curzon Price, has argued that companies often seek to position themselves in a zone of both static and dynamic inefficiency, to the detriment of consumers, which gives competition authorities an important role in preventing this. A possible example is Microsoft, which initially innovated with Windows to the huge benefit of consumers, but then reacted to the entry of Netscape by the anti-competitive move of bundling Internet Explorer with Windows, blocking the potential for Netscape to become an important piece of middleware. Other examples are the ‘pay for delay’ cases, Servier and Lundbeck, pursued by the European Commission. And some argue that Google’s dominance in search is self-reinforcing because it gives the company an unbeatable position in the data so essential to search, and that this dominance might be used to restrict future innovation.
We start today’s conference with the major theme of big data, and we have had a preview of some of the issues in Andreas Mundt’s interview with Jorge Padilla: Can access to big data represent a barrier to entry? Could refusing a competitor access to data be anti-competitive? Is data a relevant market? Could data be classified as an essential facility? And how does competition law sit with, and interact with, privacy laws? These are complex issues which I will not try to anticipate; instead I look forward to the debate in the next session.
But there is one aspect of the debate over big data that I do want to touch on: the consumer’s access to their own data. We are all increasingly aware that, as we operate online, data on our actions are being amassed to very considerable commercial gain. What rights, if any, do we have, or should we have, to our own data? Clearly privacy laws restrict how data are used, but whether with sufficient force is a matter for debate. Would markets work better for consumers if they had access to, and even control over, their own data? The answer to this question is clearly relevant to the design of remedies where a breach of competition law is found – itself a very difficult matter to establish. It could also be relevant to the design of undertakings to allow a merger to go forward.
But to make a somewhat parochial point: under the very particular markets regime that we have here in the UK, it is very relevant indeed to the design of remedies following our detailed analysis of the operation of a market. For example, the CMA recently concluded a major market inquiry into the retail banking sector in the UK. At the heart of the remedies put forward as a result of that inquiry is a common application programming interface (API) that will provide a standard for digital interactions in this market. We believe this will allow the rise of much more effective competition online. The major problem in this market, and in the UK energy market, is a large number of inert consumers who are reluctant to devote much time and effort to searching for better deals and therefore end up paying a premium price. In both markets a different approach to access to data may well provide the answer. In the energy market we already see intermediaries acting on behalf of consumers, searching for the best deal and carrying out the switch for them – in effect a shift in market structure, with the rise of an intermediary layer of businesses brought about by a change in access to personal data. The inert consumer has to make just one effort to get off the sofa and sign up, and can then recline once more. If this intermediary sector emerges, there will of course be a need to ensure that these intermediaries deliver what they claim. But if that were the regulatory issue, we would be in a much better place than we are now.
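To sketch what that intermediary model could look like once standardised data access is in place, here is a minimal, hypothetical example: an agent takes a consumer’s usage figure (as might be obtained through a consented data API) and a set of standardised tariffs, and recommends the cheapest deal. The data shapes, field names and numbers are invented for illustration; they are not the Open Banking specification or any actual energy-sector standard.

```python
# Hypothetical sketch of the intermediary model that standardised data access
# is intended to enable: pull a consumer's usage and providers' standardised
# tariffs, then recommend the cheapest deal. All names and numbers are
# invented for illustration; this is not any actual API specification.
from dataclasses import dataclass


@dataclass
class Tariff:
    provider: str
    standing_charge: float   # fixed cost per year
    unit_rate: float         # cost per unit consumed


def annual_cost(tariff: Tariff, annual_usage: float) -> float:
    return tariff.standing_charge + tariff.unit_rate * annual_usage


def best_deal(annual_usage: float, tariffs: list[Tariff]) -> Tariff:
    """What a switching intermediary would do with standardised data access."""
    return min(tariffs, key=lambda t: annual_cost(t, annual_usage))


if __name__ == "__main__":
    usage = 3_100.0  # units per year, e.g. read via a consented data API
    offers = [
        Tariff("incumbent", 120.0, 0.16),
        Tariff("challenger_a", 90.0, 0.15),
        Tariff("challenger_b", 100.0, 0.14),
    ]
    winner = best_deal(usage, offers)
    print(winner.provider, round(annual_cost(winner, usage), 2))
```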
That takes me to the final issue that I would like to touch on in this brief opening – that is, the rise of algorithmic decision-making in the online world. Algorithms are everywhere – online bookings for airlines, hotels, etc; Amazon and other online platforms; bots buying up tickets in the primary market for shows and concerts, to name just a few. And we at the CMA, like many other competition and consumer agencies, have been intervening in these markets to ensure that the rise of algorithms works to enhance competition, not close it down.
But the rise of the algorithmic economy raises potentially difficult questions for competition policy, which Ezrachi and Stucke discuss in their excellent book ‘Virtual Competition’ (and I look forward to Maurice’s comments in the next session). And this may be one area where my earlier Panglossian statement – that the principles of competition law are alive and well and just need to be applied appropriately to different circumstances and evidence – may be questioned. Algorithms can provide a very effective way of almost instantly co-ordinating behaviour, possibly in an anti-competitive way. Where algorithms are designed by humans to do so, this is merely a new form of the old practice of price-fixing. But machine learning means that the algorithms may themselves learn that co-ordination is the best way to maximise longer-term business objectives. In that case, no human agent has planned the co-ordination. Does that represent a breach of competition law? Does the law stretch to cover sins of omission as well as sins of commission: the failure to build in sufficient constraints on algorithmic behaviour to ensure that the algorithm does not learn to adopt anti-competitive outcomes? And what if constraints are built in but they are inadequately designed, so that the very clever algorithm learns a way through them? How far can the concept of human agency be stretched to cover these sorts of issues? I suggested earlier that the competition tools at our disposal can tackle the competition issues we face in the new digital world, but perhaps this last issue is one where that proposition does not hold. I think we will touch on these questions today, but we will also be debating them for a long time to come. And if we do not find good answers, will that lead other jurisdictions to see merit in the powerful markets regime that we have in the UK, which would allow us to address questions like this through a different, perhaps more appropriate, set of tools? Bill Allan’s interview with Concurrences touches on these issues, and I look forward to his remarks in this afternoon’s session.
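To give a flavour of how co-ordination might emerge without any human planning it, here is a minimal sketch with entirely made-up payoffs and parameters: two independent Q-learning pricing agents, each observing only last round’s prices and maximising only its own profit. Nothing in the code encodes an agreement, yet in simulations of this general kind such agents can, under some parameterisations, settle into reward-and-punishment pricing patterns that keep prices high; whether that happens here depends on the payoffs, learning rates and run length chosen.

```python
# Minimal sketch (made-up payoffs and parameters) of two independent pricing
# algorithms, each maximising only its own profit via Q-learning over last
# round's observed prices. Nothing here encodes an agreement; the question is
# what pricing pattern the agents learn.
import random

PRICES = (0, 1)           # 0 = low price, 1 = high price
PROFIT = {                # my profit for (my price, rival's price)
    (1, 1): 10,           # both price high: shared, monopoly-like profit
    (1, 0): 2,            # I price high, rival undercuts me
    (0, 1): 12,           # I undercut a high-pricing rival
    (0, 0): 5,            # both price low: competitive profit
}
ALPHA, GAMMA, EPISODES = 0.1, 0.9, 50_000


def choose(q, state, epsilon):
    """Epsilon-greedy action choice over this agent's own Q-values."""
    if random.random() < epsilon:
        return random.choice(PRICES)
    return max(PRICES, key=lambda a: q[(state, a)])


def run(seed=0):
    random.seed(seed)
    # State, from each agent's own perspective: (my last price, rival's last price).
    q1 = {((mine, rival), a): 0.0 for mine in PRICES for rival in PRICES for a in PRICES}
    q2 = dict(q1)
    p1, p2 = 1, 1          # last round's prices
    high_high_late = 0
    for t in range(EPISODES):
        epsilon = max(0.01, 1 - t / EPISODES)      # explore less over time
        a1 = choose(q1, (p1, p2), epsilon)
        a2 = choose(q2, (p2, p1), epsilon)
        r1, r2 = PROFIT[(a1, a2)], PROFIT[(a2, a1)]
        # Standard Q-learning updates; each agent sees only its own reward.
        best1 = max(q1[((a1, a2), a)] for a in PRICES)
        best2 = max(q2[((a2, a1), a)] for a in PRICES)
        q1[((p1, p2), a1)] += ALPHA * (r1 + GAMMA * best1 - q1[((p1, p2), a1)])
        q2[((p2, p1), a2)] += ALPHA * (r2 + GAMMA * best2 - q2[((p2, p1), a2)])
        p1, p2 = a1, a2
        if t > EPISODES * 0.95 and (p1, p2) == (1, 1):
            high_high_late += 1
    return high_high_late / (EPISODES * 0.05)


if __name__ == "__main__":
    print(f"share of late rounds with both prices high: {run():.0%}")
```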
One concluding thought about the future. The father-and-son team of Richard and Daniel Susskind, in their book ‘The Future of the Professions’, predict that machine learning will transform the professions within the next decade or two. Algorithms are already taking over the grunt-work in the legal profession. Most professionals say that professional judgement cannot be replaced by a machine, but is this right? Algorithms are increasingly outperforming medical specialists in diagnosis, for example. Could algorithms, through learning, come to outperform the judgements of competition specialists? The Susskinds suggest yes, but not yet. So I and my colleagues are safe, and possibly my successor and her or his successor. But after that, Chairman Bot?
There is, however, one possible hope for us humans. We have seen steady but spectacular improvement in algorithms, which have come to beat humans first at chess, then Go, and most recently poker. Why not competition law and economics? However, there is some evidence that the best machines can be beaten by older-generation machines working in partnership with a human specialist. Perhaps human specialists will continue to be useful, provided they embrace the advances in machine learning. What competition agencies can be sure of is that those sitting on the other side of the table will have access to the very best machine learning in our field. We need to make sure that we keep abreast of this fast-moving technology.
We will today have a very rich debate which may illuminate some of the issues I have highlighted and some key issues that I have overlooked. I greatly look forward to the day.