The elephants in the room

Market regulation may be as old as civilisation, but it does not stand still. Over the last 20 years, economic analysis has moved centre stage in antitrust investigations, with an ever-growing toolkit and ever richer data. But the assumption that this would turn competition policy into a technocratic backwater, visited only by specialists, has proved to be wrong. It is back on the political agenda, with politicians as well as economists and regulators worrying about its entanglement with globalisation and the digital economy, and the impact of these 21st century market developments on inequalities of wealth and income. So has the trend towards detailed analysis blinded the competition policy “establishment” to these tectonic shifts? And if competition policy wants to remain relevant for the next 20 years, how does it need to change?

If you thought antitrust was a peculiarly modern phenomenon, think again. The first recorded competition policy rules are almost 4,000 years old. King Hammurabi of Babylon included them on a list of almost 300 laws carved on the giant stone tablet now in the Louvre in Paris.

Fair trading was clearly taken very seriously: if a purchaser of property couldn’t produce a witness to the transaction, he would be deemed a thief and put to death. Other Babylonian rules also set the first recorded price caps – detailed regulations on the charges asked for the hire of oxen, wagons and cargo boats.

Some 2,000 years later, the Roman emperor Diocletian imposed price caps on over 1,000 products. The list of products included lions – with the price capped at a whopping 150,000 denarii (perhaps they were found to hold a dominant position).

Diocletian's competition policy seems to have worked well in the short term, preventing various exploitative practices such as the buying up of everyday goods to create artificial scarcity and drive up prices. But it quickly started to do more harm than good. Inflation meant that whole towns were unable to produce goods that could be sold below the price caps, or to trade competitively at all.

Merchants either traded illegally or went in for barter. Regulation, in short, killed the market, and by the end of Diocletian’s reign, only four years after the Edict on Maximum Prices had been issued, it had to be abandoned. Even so, it took the new emperor Constantine several years to stabilise the Roman economy.

History is studded with such attempts to control market manipulation or inflation by capping prices; but other regulations designed to prevent anti-competitive practices were also in place in northern Europe before the first millennium of the Common Era. In England, the Domesday Book makes mention of the sanctions against “foresteel” – buying up goods before they come to market – that had existed in the time of Edward the Confessor.

For many centuries, however, monopolies were not so much banned as licensed, providing useful government revenue. It was not until the times of Adam Smith and John Stuart Mill that monopoly or cartel behaviour – which Smith opined would occur every time a group of traders got together, even if only for “merriment” – began to be seen as an affront to society.

The association of free trade with human liberty spawned new laws prohibiting trade restraints in the eighteenth and nineteenth centuries. French competition law has its roots in a statute passed in 1791 – just two years after the revolution – abolishing the guilds and merchant corporations. But the first major landmark of modern competition policy was the Sherman Antitrust Act, passed by the US Congress in 1890.

The passage of this law, and the Interstate Commerce Act a few years previously, was a response to growing public resentment at the monopoly and cartel actions (“trusts”) of industrial corporations, and an expectation that federal government should take responsibility for controlling these. The act was and remained contentious, with as recent and respected a critic as the economist Alan Greenspan, Chairman of the Federal Reserve for nearly 20 years until 2006, questioning its impact on enterprise and innovation.

But in fact, the only significant use of the act in its first ten years was against a trade union rather than a corporation: the union’s backing of a railway strike was deemed to be a “restraint of trade” under the act. (Indeed, this potential use of the act had been influential in getting it through Congress, which was not wholly convinced of the need to restrain big business.) It was not until the 20th century, when supporting legislation was passed outlawing a wider range of anti-competitive practices, that America’s ever-growing corporations really began to feel the heat.

What the Sherman Act unquestionably did do, however, was to land competition policy in the hands of the legal profession. With the Department of Justice (DoJ) charged with bringing actions, and/or plaintiffs needing lawyers to make their case for damages, the role of economists was secondary at most. It was only with the creation of the Federal Trade Commission in 1914, with its strong economics division (now the Bureau of Economics), that the economic impact of competition policy began to be properly debated. But what did economists have to offer?

Over time, they were to develop important roles as expert witnesses in courtroom dramas, where the lawyers might be debating when a certain merger might result in “market dominance” within the meaning of whichever laws governed the jurisdiction in question. But just as forensic scientists do not write criminal law, economists needed to offer more to help make competition policy itself.

As the Sherman Act showed, modern competition law had its roots in notions of “fairness” rather than economic impact. As late as the beginning of the 1960s, the findings in the famous Brown Shoe case emphasised the importance of preserving small, local companies. It concerned a merger that would probably not attract five minutes of regulatory attention today, yet it set ground rules for market definition that endured for decades. So what has changed since, and what part has economics had to play in the evolution of policy?


To begin with: why was competition thought worth protecting? Well, because one of the first things an economics student learns in the study of markets is that competition stimulates innovation and efficiency, and drives down prices. Competition, in short, was a “good thing” because it increased economic efficiency and consumer welfare; and if it led to the collapse of inefficient firms, this was a necessary and even desirable outcome.

The problem with classical economics, however, was that its image of the desirable outcome that would maximise consumer welfare was a fantasy world of “perfect competition” – with many different fully informed customers choosing between the offerings of many different equally well-informed producers. Little in the real world came even close to that perfect state, making it of little use as a basis for policy. Fortunately, modern economic ideas of competition and markets arrived in time to fill the gap.

An early milestone was Friedrich von Hayek’s 1946 lecture, The Meaning of Competition. Hayek was originally better known for his work on monetary theory, for which he shared a Nobel Prize in 1974, but it is his work on markets that has had the greater shelf-life. In this lecture he argued that the assumptions of perfect competition were so restrictive as to make any application of it to the real world utterly meaningless. It was a “static” analysis, while:

...competition is by its nature a dynamic process whose essential characteristics are assumed away by the assumptions underlying static analysis.

The world was full of imperfections; what made markets so brilliant (and necessary) was their ability to iron out and work around those imperfections. And the prices arrived at in free markets would be powerful economic signals.

For Hayek, competition was a process of discovery: a well-functioning market was one that was effective at reacting to, and re-organising itself in the face of, a constantly changing  and imperfect world. One could never, in such an uncertain world, simply point to what a competitive outcome should look like and regulate to achieve it. (Memo to Diocletian: we just don’t know what the right price is for a lion.)

A heavy blow to faith in free markets was delivered by the financial crisis, and a shift to more direct intervention by regulators followed (even, in the UK, including price caps). But Hayek’s concept still lives on, as witnessed by the definition of competition used by the UK’s Competition and Markets Authority (CMA) in its market investigation guidelines:

…Competition is a process of rivalry as firms seek to win customers’ business. It creates incentives for firms to meet the existing and future needs of customers as effectively and efficiently as possible – by cutting prices, increasing output, improving quality or variety, or introducing new and better products, often through innovation; supplying the products customers want rewards firms with a greater share of sales.

It is a perspective that makes it clear that we cannot be sure that a market isn’t working just by looking at the outcomes. We need to understand the way in which the market got there, and where it might go next. We need, in short, to focus on the dynamic, not the static. And it was that need to get under the bonnet of a market before intervening that gave economics a pivotal role in the development and enforcement of modern competition policy.


Hayek may have published his work on competition in the 1940s, but it took a long time for economic analysis to truly take its place at the heart of antitrust enforcement. By the 1990s, industrial economics was being increasingly widely deployed in the assessment of mergers and abuses of dominance in the US and Europe, but there was a lack of consistency, in both the quality and depth of analysis and the regulatory framework around it.

In merger investigations, for example, the critical question for US investigators was whether a tie-up between two firms would lead to a “substantial lessening of competition” (SLC). By contrast, for the European Commission (EC) the question was whether it would “create or strengthen a dominant position as a result of which effective competition would be significantly impeded”. This led a number of European investigations down rabbit holes of lengthy debate about whether the firms in question were “dominant”. Arguably, it also encouraged a focus on the structure of the market, rather than a Hayekian analysis of the impact of the merger on the process of competition.

These issues came to a head in Europe nearly 20 years ago, just as Frontier was getting going. In 2002, the European Court of First Instance annulled three EC merger prohibition decisions in swift succession. The Court’s judgements were blisteringly critical of the EC’s (mis)use of economic theory and its assessment of the evidence.

In overturning its decision to prohibit the merger of package holiday companies Airtours and First Choice, for example, the Court identified “errors, omissions and inconsistencies of the utmost gravity”. Among other things, the Court found that the Commission had misapplied the economic theory of tacit collusion, and had erred in its assessment of the ability of smaller competing tour operators to compete with the merged entity and the ability and willingness of customers to switch away from the merged entity in the event that it sought to restrict capacity and increase prices.

These decisions prompted a good deal of soul-searching in Brussels, followed by a series of significant changes to the framework used to investigate mergers. This included the introduction of a “significant impediment of effective competition” (SIEC) test that was much more closely aligned with the SLC test used in the US. The EC also committed itself to an:

…across-the-board increase in the economic expertise in our case teams and…the capacity for more rigorous testing of the economic models we apply in our investigations.

This commitment manifested itself in two ways. First, the EC introduced new guidelines on a range of aspects of antitrust law – including merger assessments and enforcement against abuses of a dominant position – which made it clear that economic analysis would play a central role in shaping the Commission’s investigation process and resulting decisions. At the same time, the EC created the Office of the Chief Competition Economist, soon containing 30 or so competition economists involved in the more contentious aspects of much of the EC’s competition work.

The original function of the office was to stimulate debate – and at times internal challenge – within the EC, with the aim of ensuring that its antitrust decisions were couched in more rigorous thinking about the process of competition. In practice, the office has played an even more central role than this, frequently spearheading much of the investigation process.

In the decade and a half since the reforms, economic analysis has cemented its position at the heart of antitrust investigations on both sides of the Atlantic. The analysis of mergers provides a case in point. In the past (and especially in Europe prior to the replacement of the “dominance” test), much of the focus of merger assessment was on “defining” markets.

The idea was that you first needed to define the competitive boundaries of the economic activities that the merging firms were engaged in (rather like draping a boundary rope round a cricket field). Any businesses whose economic activities fell within this boundary were deemed relevant competitors, whereas any businesses outside it were, in effect, ignored. The next steps were to calculate the market shares of the merging firms within this roped-in market and thereby to identify the impact of the proposed merger on market concentration. Mergers that led to a material increase in market concentration would be frowned upon.

From a “clear bright lines” perspective, this approach had much to recommend it. The list of competitors could be neatly defined and tidy rules could be drawn up about threshold levels of market concentration that would trigger concerns. But splitting a colourful spectrum of firms into a black-and-white list of “competitors” and “non-competitors” was essentially an artificial exercise – just about acceptable for a first-pass analysis, but not for an in-depth assessment.

And from a Hayekian angle, the approach fell well short of the mark. Market definition is an inherently static concept: it marks out fixed boundaries that do not help understanding of the dynamic forces that shape the long-term evolution of competition. This may not be a major problem in long-established markets, where the same firms have been playing the same roles for years, but it definitely is a problem in fast-growing industries where innovation is the watchword. Similarly, market concentration (the twin sister of market definition) is, at best, a rough and preliminary indicator of market power and – as we shall discuss later – can be seriously misleading in markets characterised by “winner-takes-all” competition and/or low barriers to entry.

However, few in-depth merger investigations conducted in the US or by the EC were quite so simplistic by the early 2000s. In practice, some consideration would usually be given to “out-of-market” constraints and the potential for new firms to enter and expand. And in some cases, the merging parties would make detailed economic submissions that would act as a catalyst for a more rigorous investigation. Nonetheless, the framework did little to ensure a consistently rigorous approach – a weakness brutally exposed by the Court of First Instance’s rulings in the early 2000s.

And meanwhile, economists had begun to search seriously for new toolkits for merger analysis, with academics based in the US leading the charge. This resulted, among other things, in a family of techniques that focused on estimating the “upward pricing pressure” that a merger could bring about, without the need to fixate on market definition or simplistic measures of market concentration. With these techniques, economists look at empirical evidence on the closeness of competition between merging parties, in order to estimate how far they are preventing one another from increasing prices; and they combine this analysis with evidence on the merging parties’ profit margins, in order to estimate how far a merged entity would benefit from increasing prices once its merger removed the pressure of competition between its two elements.
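To make the mechanics concrete, the sketch below shows one common member of this family, the Gross Upward Pricing Pressure Index (GUPPI): the diversion ratio between the merging products multiplied by the partner product’s margin, scaled by relative prices. The numbers are purely illustrative and not drawn from any real case.

```python
def guppi(diversion_1_to_2, margin_2, price_2, price_1):
    """Gross Upward Pricing Pressure Index for product 1.

    diversion_1_to_2: share of product 1's lost sales that would be
                      recaptured by product 2 after the merger
    margin_2:         product 2's margin as a fraction of its price
    price_2, price_1: pre-merger prices of the two products
    """
    return diversion_1_to_2 * margin_2 * price_2 / price_1

# Illustrative inputs: 25% of lost sales divert to the partner product,
# which earns a 40% margin and sells at a higher price point.
score = guppi(0.25, 0.40, 100.0, 80.0)
print(f"GUPPI for product 1: {score:.1%}")  # 12.5%
```

A higher score signals closer competition between the merging products, and hence stronger post-merger incentives to raise price; in practice authorities treat such scores as a screen to be combined with other evidence, not a decisive threshold.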

This “upward pricing pressure” toolkit has now been widely deployed in a number of jurisdictions. In the US, the benefits of the framework are directly espoused in the horizontal merger guidelines of the DoJ and the Federal Trade Commission. And while the EC’s guidelines do not mention upward pricing pressure techniques directly, the toolkit has played a prominent role in informing the EC’s thinking on a number of high-profile merger investigations, particularly in the telecoms sector.

At the level of the individual EU member state, the take-up of these techniques has been less consistent. Some national competition authorities, such as the UK’s CMA, have made a lot of use of these tools, whereas others, such as Germany’s Bundeskartellamt, for now remain fonder of the traditional market definition approach. In Asia, upward pricing pressure tools have similarly played a more marginal role to date, though China’s antitrust watchdog, MOFCOM, has taken account of such analysis in recent merger clearance decisions.

In practice, market definition and market concentration analysis continue to play a role in most horizontal merger investigations, albeit more often as an initial screening device than the centrepiece of the analytical framework. And it is debatable how far the new tools really improve on the old approach. They may move considerations of the closeness of competition centre stage, but they make a number of implicit assumptions about how competition works that are challenged rather less often than they should be (for example, they typically assume a specific form of price competition and a certain shape of consumer demand). Above all, they still essentially view competition through a static “classical” lens and so fail Hayek’s call to think about competition as a dynamic process of innovation and discovery.

But the improved toolkit has brought other benefits. For example, the shift in approach from market definition to closeness of competition has arguably paved the way for more sophisticated analysis of non-horizontal mergers. For these transactions, the potential competition concerns involve the merged firm leveraging pre-existing market power in one area to build market power elsewhere: a vertical merger, for example, might allow a monopoly producer of an essential input to foreclose downstream rivals by refusing to supply this input on competitive terms. Or, following a conglomerate merger, a firm could make the purchase of a popular product conditional on the purchase of a less popular one that customers might otherwise have bought from a rival producer.

It is impossible to assess the severity of these non-horizontal competition risks simply by defining the relevant markets and calculating market shares: such an approach might, at best, provide some insight into whether the merged party in question would be able to foreclose rivals, but it would reveal nothing about whether it would in reality have any incentive to do so. After all, pursuing such a strategy would involve costs for the merged business in question as well as benefits, since it would be restricting its own sales in one market in an effort to cripple its rivals elsewhere.
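This ability-versus-incentive trade-off is sometimes worked through as simple “vertical arithmetic”: compare the upstream profit sacrificed by refusing to supply rivals with the downstream profit gained from the rival customers who switch. A minimal sketch, with purely illustrative figures and the simplifying assumption that rivals’ downstream sales track their input purchases one-for-one:

```python
def foreclosure_incentive(rival_input_volume, upstream_margin_per_unit,
                          diversion_to_own_downstream,
                          downstream_margin_per_unit):
    """Net profit change for the merged firm from refusing to supply rivals.

    Lost:   upstream margin on every unit of input the rivals would have bought.
    Gained: downstream margin on the rival customers who switch to the
            merged firm's own downstream arm.
    """
    lost_upstream = rival_input_volume * upstream_margin_per_unit
    gained_downstream = (rival_input_volume * diversion_to_own_downstream
                         * downstream_margin_per_unit)
    return gained_downstream - lost_upstream

# Illustrative: rivals buy 1,000 units of input (upstream margin 5/unit);
# 60% of their customers would switch; downstream margin is 12/unit.
net = foreclosure_incentive(1000, 5.0, 0.6, 12.0)
print(net)  # positive, so foreclosure could pay off on these assumptions
```

Flip the diversion rate down and the incentive disappears, which is exactly why market shares alone cannot answer the question: the same “ability” to foreclose can be profitable or ruinous depending on margins and switching behaviour.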

Fortunately, the techniques that economists had developed to assess incentives to increase prices following horizontal mergers could also readily be adapted to aid thinking about the equivalent questions in non-horizontal cases. As a result, competition authorities have become increasingly comfortable with lifting the bonnet on vertical theories of harm that they may otherwise have avoided. An analysis of merger investigation decisions issued by the EC over the past decade bears this out. As Figure 1.1 illustrates, there has been an upward drift in the number of references to “vertical” and “non-horizontal” issues in such decisions.



In the US, there are signs of a similar trend, albeit from a different starting point: until recently the prevailing orthodoxy there was that vertical mergers were at worst harmless and frequently pro-competitive, reflecting the influence of laissez-faire “Chicago school” thinking. But the new tools available to competition authorities for analysing vertical mergers appear to have emboldened the competition watchdogs. In 2017 the DoJ broke with decades of US antitrust precedent in bringing a lawsuit against AT&T’s $85 billion acquisition of Time Warner on the basis of vertical concerns. The DoJ’s worry was that AT&T would have the ability and incentive to foreclose rival video streaming services by withholding or impairing access to “must-have” Time Warner video content. The DoJ’s lawsuit against the companies was ultimately thrown out by the courts – suggesting that the DoJ’s ambitions to take a more aggressive stance on vertical mergers may have been knocked back, at least for now. Nonetheless, the case resulted in an intense level of scrutiny on important vertical issues that would, until recently, have been almost unthinkable in the US.

However, the increasing centrality of economic analysis in competition investigations has come at a cost, in terms of clarity as well as cash. It is not only lawyers who may miss the days when enforcement had clearer bright lines, involved simpler and shorter investigations, and when making risk assessments based on precedent was considerably easier. The financial cost to firms of competition inquiries has ballooned. Regulators complain, not without some reason, of an arms race, in which companies spend ever more on more detailed and comprehensive analysis. But the authorities contribute to the race with ever greater use of their information-gathering powers. The result is that inquiries often become overlong and overcomplicated.

Competition economists, like those at Frontier, have played an increasing role in the vast set-piece battles of competition inquiries and merger clearances. The biggest ever takeover occurred just as Frontier was being launched – the $183 billion merger between Vodafone and Mannesmann. With deals of this size, the costs of regulatory clearance seem almost trivial. But cumulatively, the costs of regulatory inquiries into a target industry may be substantial.

And the introduction of new economic techniques has had another unhappy by-product. Economics may be playing a more central role in antitrust investigations across a wide range of jurisdictions, but different competition authorities started from different positions and have progressed at different rates. Consequently, the arguments that might satisfy MOFCOM in China remain different from those that would be most persuasive to the Bundeskartellamt in Germany, and different again from the types of evidence now sought by the DoJ in the United States or the EC in Brussels. Even if most competition authorities are heading in the same direction, the different starting points and rates of travel may create even more work for businesses for the foreseeable future.


The cost and complexity of competition investigations may be a nuisance for businesses and regulators, but until recently the sense was that this was a side-effect of a system that was generally serving an important purpose well – a matter for the industry practitioners to consider, but not a dilemma to keep political leaders awake at night. The impression that the modus operandi of competition policy was largely settled was created by over a decade of steady incremental development. It came to be taken for granted that competition authorities – a little like central banks – were one of those techy bits of government best left to independent experts.

But like central bankers, competition authorities have found that backwaters can still be flooded by events. And competition policy is back on the political agenda, with assaults on its uses and abuses from all sides. In the US, there are growing concerns that competition policy has been unable to prevent rising levels of market concentration and inequality. These voices have found echoes in Europe, with the economic ministers of France and Germany going as far as suggesting that current competition policy has allowed China to take unfair advantage of globalisation.

Three huge – and interlinked – forces have combined to catapult competition policy back to political prominence: the rise of the “intangible” asset, globalisation and the financial crisis and its aftermath.


Investment in intangible assets (ideas and software) began to outstrip investment in tangibles (hardware) in the US as far back as the 1990s. Now much of Europe is following suit. In Capitalism Without Capital, published in 2017, Professor Jonathan Haskel, of Imperial College London, and Stian Westlake point out how the economic characteristics of intangible assets create a shift in favour of winner-takes-all competition in the global marketplace.

Professor David Autor, of the Massachusetts Institute of Technology, and others have done much empirical work demonstrating the extent to which markets have been shifting in favour of a smaller number of “superstar firms” – not just in the tech sector, but as Figure 1.2 shows, across a broad swathe of “traditional” industries, ranging from manufacturing to utilities and transportation.

A reduction in the intensity of competition resulting from a greater concentration of market power is evidently a potential threat to consumers. But in the US, the debate has been given an added edge by arguments linking globalisation and the increasing levels of market concentration to the second factor: an increase in the inequality of wealth and income.


At the international level, the picture is more complex: while debate continues about the extent of the shifts, it seems generally agreed that if the world’s poorest have not gained much, rapid growth in poorer countries has created a new “middle class”, whose incomes have risen while those of many in rich countries have stagnated. But all analyses seem to agree on the sharp increase in the share of the richest 1%.

The link with the development of the new economy is debatable, but the gains enjoyed by its leaders are evident, as they use their innovative ideas (or intangible assets) to generate wealth on a global scale. The competition for their skills is intense – one reason why earnings at the leading edge of technological progress have risen dramatically since 2000, while wages at other firms have flatlined.

The third factor – the global financial crisis, and the subsequent recession – has left ordinary citizens in Western economies with a still greater sense that markets are not working for them. Slow or no growth in labour productivity throughout most of the economy has depressed real wages and the labour share of GDP. With ever-larger profits in the hands of an ever-smaller group of mega-corporations, so the argument goes, the world may be entering a new “Gilded Age” of byzantine trusts and robber barons. For Vanderbilt and Rockefeller, should we now read Bezos and Musk?

In political debate, a distinction is rarely made between “good” profits (generated by the successful exploitation of innovation) and “bad” profits (generated by the exploitation of market power). The same emotional elision is made between “big” and “bad”, as it has been from the early days of competition policy: Louis Brandeis, the associate justice of the Supreme Court who urged the creation of the Federal Trade Commission, wrote passionately of The Curse of Bigness, a theme revived by the school of New Brandeisians today.

Economists struggling to maintain the argument that the key test is not size but consumer welfare find themselves faced with the deceptively simple question: if competition is the key economic mechanism through which the benefits of capitalism are disseminated, how or why has this failed to happen?

Among policy-makers and politicians, the responses to these pressures have been conflicting and even self-contradictory. In the US, there is the anti-competitive protectionist rhetoric – and actions – of President Trump, conflicting with his occasional assaults on those Big Tech corporations whom he has found less than supportive. But among both populists and policy-wonks alike, there is talk of the need for another “trust-busting” era – an assault on the homegrown monopolies of the new economy.

It’s not often that discussions of the Herfindahl-Hirschman Index – a decidedly unsexy measure of market concentration – make it all the way to the Oval Office. But in 2016, they did just that, when a report by the White House Council of Economic Advisers prompted President Obama to issue an Executive Order calling on federal departments and agencies to identify actions to promote more competitive markets.
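For readers unfamiliar with it, the HHI is simply the sum of the squared market shares of every firm in a market, with a merger’s effect approximated by the change it produces. The shares below are made up for illustration:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared percentage shares.

    Ranges from near 0 (an atomistic market) to 10,000 (a pure monopoly).
    """
    return sum(s ** 2 for s in shares_pct)

# Hypothetical four-firm market, before and after the 30% and 20% firms merge.
pre = [30, 30, 20, 20]
post = [30 + 20, 30, 20]
delta = hhi(post) - hhi(pre)
print(hhi(pre), hhi(post), delta)  # 2600 3800 1200
```

Under the 2010 US horizontal merger guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points is presumed likely to enhance market power – though, as this chapter argues, concentration measures of this kind are at best a first-pass screen.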

The case for the prosecution against the competition authorities was most powerfully made by a then-almost-unknown law student, Lina M. Khan, in the Yale Law Journal in 2017. In “Amazon’s Antitrust Paradox”, she argued that:

…the current framework in antitrust – specifically its pegging competition to “consumer welfare,” defined as short-term price effects – is unequipped to capture the architecture of market power in the modern economy… Specifically, current doctrine underappreciates the risk of predatory pricing and how integration across distinct business lines may prove anticompetitive.

A legal celebrity almost overnight, Lina Khan’s critique has provoked reaction from a number of prominent US scholars and lawyers such as Daniel Crane and Herbert Hovenkamp. The latter has warned that if “superstar” companies are targeted simply because their low prices hurt competitors, we may “quickly drive the economy back into the Stone Age, imposing hysterical costs on everyone”. But Khan has struck a chord with policy-makers.

Senator Elizabeth Warren, who in 2019 declared herself one of the Democratic hopefuls for the next presidential election, has already published her manifesto on the subject, zeroing in on the Big Tech companies that feature prominently on the list of global superstar firms. She argues that:

…As these companies have grown larger and more powerful, they have used their resources and control over the way we use the Internet to squash small businesses and innovation.

Senator Warren blames weak antitrust enforcement and promises dramatic remedial action – all neatly captured for the digital age under the hashtag #BreakUpBigTech. Even the Chicago school – famous for its free market economics – has turned its mind to the question of whether the US has “a monopoly problem” and what should be done about it. And a growing band of New Brandeisians argue strongly for a more wide-ranging antitrust regime.

A group of a dozen attorneys general from different states have proposed to the FTC inquiry into “Competition and Consumer Protection in the 21st Century” (launched late in 2018) that the balance of competition law should change. Its focus on “price effects that challenged conduct may have on the consumer” should also extend to “conduct with harmful effects on innovation and quality” and it should seek to “protect the competitive process, for the ultimate benefit of consumers”.

In Europe, the challenge has been slower to build, but the debate is becoming equally stormy. And again it is Big Tech that is emerging as the main lightning rod for criticism. There is concern about the power of American tech giants, socially and economically, as well as (belated) recognition that still greater economic influence may be wielded by Far Eastern competitors in the future.

There is also, slightly contradictorily, recognition of the importance of scale for effective competition in the increasingly integrated global marketplace, with some voices in France, Germany and some other countries calling for the creation of national or European “champions” that can compete on level terms with the Wild West. The EC’s ongoing study of market concentration will no doubt fuel this debate.

For the present, however, the EC remains focused on competition rather than competitiveness – as illustrated by its controversial recent decision to prohibit Siemens’ proposed acquisition of Alstom. In the UK, the same debate erupts periodically, and Brexit has undoubtedly sharpened it – with the Brexiteers radically divided between free marketeers and protectionists. At least for the present, faith in the economic value of competition remains embedded in the UK regulatory system, and indeed several sector regulators (such as those for financial services) have only recently had competition added to their statutory objectives.

But unease about the focus of competition policy is growing in Europe too. How can competition authorities make a better job of policing markets? How has the digital revolution changed the name of the game? And are we fighting a 21st century battle with 20th century thinking?

France and Germany are calling for the reform of competition policy to take the impact of globalisation further into account. They argue that the EC takes a static view of geographic markets, ignoring the future integration that firms need to anticipate if they are to remain competitive. Meanwhile, in February 2019 the chairman of the UK’s CMA, which after Brexit will necessarily have a bigger role as a wholly independent competition authority, wrote to his sponsoring department complaining that we have “an analogue system of competition and consumer law in a digital age”.

Lord Tyrie counselled his secretary of state to ignore the likely “opposition from many parts of the competition establishment” to his request to kick away a fundamental constraint on competition policy – proposing that his organisation should no longer make “interventions based on competition alone” but should take a wider view of “consumer detriment”. He may have been partly motivated by a desire to pre-empt criticism of the CMA by the expert panel appointed in 2018 by the chancellor of the exchequer, under the chairmanship of President Obama’s former chief economist, Professor Jason Furman. Professor Furman had been requested to recommend changes in the competition framework to “unlock digital competition”. But this letter was a sign of the times.


As we have seen, this is not the first occasion on which competition policy has been deemed inadequate or expected to serve a broader purpose than economic efficiency. Such diversions have, however, often been unhappy, not least because the politicisation of competition policy easily leads to tit-for-tat protectionism.

The new economy presents governments with a range of different challenges. But governments have tools other than competition policy with which to address issues such as income inequality or social disadvantage, and good governance is best achieved by using the right tools for the job.

A wider debate is taking place about the appropriate nature and extent of supervision of online markets. In April 2019 the UK Government, for example, announced plans to appoint a new independent regulator with powers to enforce a “duty of care” on online platforms, with the aim of making the UK “the safest place in the world” to be online. Whether or not a proliferation of regulators makes sense, at least it shows recognition that it would have been inappropriate to land on competition authorities the responsibility of ruling on what material on social media encouraged self-harm, child abuse or racism.

Of course there will be issues in the new economy (as in the old) where different policy objectives and different regulators are in conflict. In the US, there has been much criticism of Europe’s General Data Protection Regulation (brought into force in 2018, and presided over by information commissioners) – notably by Commerce Secretary Wilbur Ross, but also by a range of economists who argue that the regulation is anti-competitive: that it has inhibited innovation and tended to entrench the dominance of those market-leading firms that currently hold the most information on consumers.

But at the same time, the FTC has faced criticism for the weakness of US privacy protection, and the Cambridge Analytica scandal escalated anxiety about the use of personal information. In the end, the trade-off between society’s desire to protect personal data and the economic benefit it might enjoy from a free-for-all needs to be resolved at a level above that of a mere competition authority.

So in general, arguments about a new role for competition authorities should be taken more seriously when they suggest ways of better doing the day job than when they suggest night jobs for them, unrelated to competition. They are unlikely to do the latter well, while they have to be able to go on demonstrating that they can do the former.

Which brings us to the nub of the question: what are the weaknesses in the way competition authorities are going about things now? And/or do features of the new economy – in particular the digital markets at its heart – require a rethink?



As with many problems, it helps to focus on the causes rather than the symptoms. According to Professor Haskel, the issues raised by “capitalism without capital” stem from the speed with which ideas can be replicated and propagated. In the traditional world of tangible assets, it takes time to build scale: if you want to double your capacity to produce steel, you will need to spend years building a new steel plant. But when the most important competitive assets are ideas, they can be copied and scaled rapidly.

This means that businesses based on intangibles can grow very rapidly and, potentially, build a commanding market lead before their competitors have got out of bed. In digital markets this is exacerbated by a second factor: network effects. Many of the world’s leading digital services are, in effect, platforms whose main function is to bring consumers together with other consumers (e.g. social or professional networks) or with businesses looking to sell them goods or services (e.g. hotel, takeaway or taxi service platforms).

The attraction of these platforms to consumers and businesses alike increases with the number of users. People are unlikely to be interested in a social network that none of their friends have joined, or a restaurant booking platform with none of the restaurants they like to eat at. However, these beneficial network effects can also lead markets to “tip” towards a single platform, resulting in markets in which one competitor attracts almost all consumers and businesses.
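The tipping dynamic described above can be illustrated with a minimal simulation – a toy Pólya-urn-style model with invented parameters, not a calibrated market model. Each arriving user joins one of two platforms with probability proportional to the platform’s current user count raised to a power `alpha`, where `alpha > 1` stands in for strong network effects:

```python
import random

def simulate_tipping(n_users=10_000, alpha=2.0, seed=42):
    """Toy model of market tipping under network effects.

    Each arriving user picks one of two platforms with probability
    proportional to (current user count) ** alpha. With alpha > 1,
    network effects are strong and early luck is self-reinforcing.
    Returns the final market share of the leading platform.
    """
    rng = random.Random(seed)
    counts = [1, 1]  # seed each platform with one user
    for _ in range(n_users):
        w0 = counts[0] ** alpha
        w1 = counts[1] ** alpha
        pick = 0 if rng.random() < w0 / (w0 + w1) else 1
        counts[pick] += 1
    return max(counts) / sum(counts)

# Strong network effects: the market tips to a single platform.
tipped_share = simulate_tipping(alpha=2.0)
# Weak network effects (alpha < 1): shares stay far more balanced.
balanced_share = simulate_tipping(alpha=0.5)
print(tipped_share, balanced_share)
```

With `alpha` above one, early random luck is amplified and the leader ends up with nearly all users; with `alpha` below one, shares remain roughly even. This is why the strength of network effects, rather than concentration itself, is the diagnostic question.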

However, a shift to winner-takes-all competition (and the increase in market concentration that may accompany this) need not imply a reduction in competitive pressure. After all, the greater the prize, the stronger the incentive to fight for it. And while businesses built on fast-moving intangible ideas can rapidly build up an impressive lead, they are by the same token vulnerable to other businesses with newer, even better, ideas.

The first two decades of the new millennium have been rife with examples of new challengers demolishing the apparently unassailable leading position of industry titans. And the trend has continued since the financial crisis. Of the ten largest firms (by market capitalisation) in 2009, only two were still on the list in 2018. As the old adage tells us, the higher you rise, the harder you fall.

But that degree of churn may prove to be exceptional, reflecting transition to the new economy. The top corporate names on the leaderboard in the early 2000s were still financial firms and oil giants, as compared with today’s technology superstars. Moreover, many of today’s leading Big Tech firms are not (or not only) originators of ideas that may be competitively vulnerable to the next big thing, but providers of digital platforms for ideas, and so may facilitate the next big thing, or have a yes/no say as to whether the next big thing arrives.

In an environment in which investment in intangible ideas is increasingly outstripping investment in physical assets, the arenas in which these new ideas can be brought to market may have more enduring value than the ideas themselves. The world’s leading digital platforms may be simultaneously innovation accelerators and competition bottlenecks, bringing the best new technologies to the world’s consumers or blocking their access to them.

There are therefore four linked characteristics of digital markets that create headaches for competition authorities:

  • The strong role innovation plays in the competitive process combined with the speed with which new ideas can take hold, leading to potentially very dynamic markets whose future evolution is harder to predict on the basis of past performance.
  • The accumulation of intangible assets that are the source of both strong competitive advantages and consumer benefits. Giving rivals access to these assets in order to strengthen competition, without diluting the incentives to innovate and the accompanying consumer benefits, is not straightforward.
  • The market position of platforms in particular, with strong network effects, and economies of scale and scope that are hard to replicate, for which it is difficult to devise workable access schemes that both strengthen competition and protect incentives to invest in them.
  • The dual role of platforms as vehicles that can accelerate the growth and success of businesses offering innovative new services, and as gatekeepers that could, potentially, use their position to undermine these businesses as competitors.


So how well equipped is competition policy to address these challenges? First, the good news. Many of the analytical frameworks that economists have developed for addressing competition in traditional markets remain – at heart – every bit as relevant to assessing competition in digital markets. In merger assessment, for example, the shift away from focusing on market concentration to thinking more directly about the closeness of competition between the merging parties means that competition authorities are better equipped to think about competition in the winner-takes-all environments that characterise many digital markets.

In such markets, concentration will almost inevitably be high, irrespective of the level of competitive pressure that firms actually face. What really matters is whether the merging firms regard one another as an existential threat in their struggle for the market, or alternatively whether – by merging – they would stand a better chance of taking the battle to the current market leader. Similarly, the analytical tools that competition authorities now routinely use to explore the impact of vertical mergers have given them a useful start to thinking about the relationship between digital platforms and the businesses that may rely on them.
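One way to see why concentration alone is a poor diagnostic in winner-takes-all markets is to compute the standard concentration measure, the Herfindahl-Hirschman Index (HHI), for two stylised markets. The shares below are invented for illustration:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares
    (in percentage points), ranging from near 0 (atomistic market)
    up to 10,000 (pure monopoly)."""
    return sum(s ** 2 for s in shares)

# A "tipped" digital market: one leader with 85%, plus fringe rivals.
digital = hhi([85, 10, 5])
# A mature market with five similar-sized firms.
mature = hhi([20, 20, 20, 20, 20])
print(digital, mature)  # 7350 vs 2000
```

The tipped market scores as extremely concentrated even if the incumbent is under constant threat of displacement, while the mature market looks only moderately concentrated. Which is exactly why the shift towards assessing closeness of competition, rather than concentration per se, matters.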

In one important respect, however, the toolkit is struggling. The analytical frameworks and sources of evidence that competition authorities use are overwhelmingly geared towards assessing competition through the lens of classical economics, with its assumption that competition is an essentially static phenomenon. Hayek’s call for competition to be recognised as a fundamentally dynamic process has still not adequately been answered. The problem may have been worrying economists for decades, but the fluidity of digital markets and the critical role that innovation has played in driving competition have brought this into sharp focus.

The way in which economists use data in antitrust investigations is a case in point. Economists like to think they have become increasingly good at measuring, describing and predicting consumer behaviour. The “demand side” of retail markets is now something for which competition economists have a heaving toolbox. You want to know whether (and to what extent) private-label granola is a substitute for branded granola? No problem – there are at least five types of economic analysis in the toolbox. And at least five increasing levels of sophistication to the econometric models that could help get you to the answer.
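As a sketch of the simplest tool in that box, a log-log regression on price and quantity data recovers a cross-price elasticity – the percentage change in private-label sales for a 1% change in the branded price. The data here are synthetic, with the true elasticity set to +0.8; a real investigation would use scanner data and far richer econometric models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic weekly data: the branded granola price varies, and
# private-label quantity responds with a true cross-price
# elasticity of +0.8 (a brand price rise shifts demand over).
log_p_brand = rng.normal(0.0, 0.3, n)
log_q_private = 2.0 + 0.8 * log_p_brand + rng.normal(0.0, 0.1, n)

# OLS on the log-log specification: the slope is the elasticity.
X = np.column_stack([np.ones(n), log_p_brand])
beta, *_ = np.linalg.lstsq(X, log_q_private, rcond=None)
elasticity = beta[1]
print(round(elasticity, 2))  # close to the true value of 0.8
```

A positive estimate indicates the two products are substitutes; the closer it is to zero, the weaker the case for putting them in the same market.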

But an obvious weakness of data is that it is necessarily historic. To use it to reach conclusions about the future, or even the present, requires an assumption that the world doesn’t change. That would seem to fall well short of the Hayekian understanding of competition as a dynamic process.

In practice, the assumption that the future will be like the past does not work too badly when examining the demand side of a market. Consumers are, generally, creatures of habit, predictable even in the way they respond to changes in fashion, as behavioural economics has helped to demonstrate.

Historically, it has also seemed reasonable to assume that nothing on the supply side will change very fast, either. And in many mature markets this is true. In a world where companies’ main assets are production facilities that take years of investment and reputation-building to establish, rapid change is not to be expected. But that presumption really starts to creak when the authorities focus on sectors of the economy where innovation is relentless, and where firms’ ability to scale rapidly using digital technology makes for a highly dynamic supply side.

Despite these challenges, competition regulators have been far from idle. Competition authorities in Europe in particular have taken action against what they have seen as abusive behaviour by the GAFAMs (Google, Amazon, Facebook, Apple and Microsoft), pinpointing a range of behaviours aimed at entrenching their positions or extending their reach into new areas of business:

  • Since 2010, the EC has fined Google’s parent, Alphabet, a total of €8.25 billion with respect to three findings of abuse of dominance. The EC found that Google Search gave preferential treatment to Google’s own comparison shopping service; that Google required manufacturers to pre-install Google Search on smartphones using Google’s Android operating system; and more recently (in March 2019) that Google imposed a number of restrictive clauses in contracts with third-party websites which prevented their rivals from placing their search adverts on these websites.
  • In February 2019, the Bundeskartellamt ordered Facebook to obtain the consent of users in Germany before combining their user data with the user data of other apps, including WhatsApp and Instagram.
  • Under pressure from the EC and German competition authorities, Amazon and Apple agreed in 2017 to end the decade-long audiobook exclusivity deal that followed the purchase of Audible. Amazon has also come under investigation by the Bundeskartellamt in connection with price parity clauses. And in September 2018, the EC opened an investigation into Amazon’s use of merchant data.
  • Most importantly for Amazon, it has begun to come under more hostile scrutiny in the US. While the competition authorities almost waved through its acquisition of Whole Foods in 2017, and have so far found little to criticise in the company’s market behaviour, the realisation that Amazon now facilitates over half of all e-commerce in the US has brought criticism down on the heads of the regulators themselves.
  • As for Microsoft, in the US competition authorities started making inquiries as far back as the 1990s, while Europe has had several bites at the cherry. In 2004, the EC ruled that Microsoft had abused its market dominance and had to allow non-Microsoft servers to work with Windows computers and servers. In February 2008, the EC fined Microsoft nearly €900 million for charging “unreasonable” royalty fees. Another €561 million fine was imposed in 2013, for failing to comply with the EC’s ruling that it had to allow users easier choice of a web browser.

However, mergers and acquisitions are conspicuous by their absence from this list of enforcement activities. On one reading, there is little to see: it is notable that Figure 1.3, which lists the world’s 20 biggest mergers over the past 20 years, includes none of the GAFAMs.

But some have questioned whether the competition authorities should have been more alive to digital markets mergers that have sneaked under the radar. The debate over acquisitions made by the GAFAMs illustrates the thorny questions that these trends are creating for antitrust watchdogs. The charge is not that they are allowing killer whales to gang up: it is that they are permitting shoals of small fish to disappear through “killer acquisitions”, whereby large tech firms buy up small promising rivals in order to avert the threat of future competition.

“Killer acquisitions” are not a new concern – they were raised as an issue in the pharmaceuticals sector, for example, long before the focus turned to Big Tech. But unlike in pharma, the acquisitions by large digital platforms often do not target companies offering similar competing products. More often the targets will offer a materially different service, which may be competing with the services of the acquiring firm, but which could also reasonably be considered a complementary or even independent service.

When Instagram was acquired by Facebook, for example, it had 30 million users and no revenues. Only five years later, Instagram had 600 million users and the prospect of being a multi-billion-dollar enterprise in its own right. Some have interpreted this as a blemish on the record of competition authorities – that in allowing the acquisition of Instagram to “slip through the net”, they permitted the world’s leading social media platform to take over a would-be challenger.

But it was far from clear at the time of the acquisition that Instagram would develop as an alternative to Facebook, rather than as a service that consumers used for a different purpose. Similarly, it remains unclear whether Instagram would have reached hundreds of millions of users were it not for the investment, expertise and access to markets provided by Facebook.

While considering the dynamic aspects of competition in these markets has become ever more important, care is needed to ensure that this does not translate into a lowering of the standards of proof or the application of a priori assumptions about the evolution of the market. The analysis must continue to be guided by the evidence available. If not, competition authority decisions risk doing more harm than good.




Of course, it helps to keep the recent controversies in perspective: as King Hammurabi’s stone tablets and Diocletian’s price caps remind us, intense political interest in competition policy is nothing new. But new thinking is undoubtedly required to address the challenges posed by the distinctive dynamics of competition in digital markets. And economists need to play their part, taking the criticisms on the chin, rather than adopting the comfortable role of naysayers.

Recognising this challenge, competition authorities in all major jurisdictions have begun investing significantly in learning more about specific features of these markets. They have been recruiting technical specialists and commissioning experts in the new markets. In the spring of 2019, three special advisers to the European Competition Commissioner, Jacques Crémer, Yves-Alexandre de Montjoye and Heike Schweitzer, published a report into competition policy for the digital era.

The report called for evolution rather than revolution, concluding that there was no need to rethink the fundamental goals of competition law, and struck a broadly optimistic note. However, a number of the authors’ proposals are likely to prove controversial (and, arguably, somewhat closer to the “revolutionary” end of the spectrum of regulatory reforms than the authors suggested). While the report made a wide range of recommendations, a common theme underpinning many was that the standard of proof in regulating digital platforms should be changed, in essence shifting from a permissive presumption of “innocent until proven guilty” to a markedly more sceptical “guilty until proven otherwise”. For example, the authors proposed that when considering the actions of leading digital platforms:

…one may want to err on the side of disallowing potentially anticompetitive conducts, and impose on the incumbent the burden of proof for showing the pro-competitiveness of its conduct… and that

…even where consumer harm cannot be precisely measured, strategies employed by dominant platforms aimed at reducing the competitive pressure they face should be forbidden in the absence of clearly documented consumer welfare gains.

Similarly, in his final report on digital competition policy in the UK in March 2019, Professor Furman opined that:

…The biggest missing set of policies are ones that would actively help foster competition. Instead of just relying on traditional competition tools, the UK should take a forward-looking approach that creates and enforces a clear set of rules to limit anti-competitive actions by the most significant digital platforms while also reducing structural barriers that currently hinder effective competition.

In his view, merger control:

…needs to become more active with an approach that is more forward-looking and more focused on innovation and the overall economic impact of mergers. Even with clearer ex ante rules, ex post antitrust enforcement will remain an important backstop – but it needs to be conducted in a faster and more effective manner for the benefit of all of the parties.

The report argued that the authorities should impose specific measures on designated strategic digital platforms; that their decision criteria should include not only how likely a merger is to reduce competition, but also the likely magnitude of such an impact; and that interim measures should be used more frequently to safeguard consumers from potential harm while competition investigations were underway.

These proposals by Crémer et al. and Furman to shift the standard of proof and introduce interim measures are understandable, but not without their risks. They reduce the risk of Type II errors (failing to pick up damaging anti-competitive conduct, or failing to do so until the damage is already done) but by the same token they increase the risk of Type I errors (picking up too many possible problems, and so delaying and discouraging new services or practices that would deliver benefits for consumers). These proposals to slow down and regulate the behaviour of leading platforms are likely to add fuel to the arguments of critics who say that too much “red tape” has already resulted in Europe lagging behind its competitors in the digital innovation stakes. (As many have pointed out, it is notable that all of the GAFAMs originated in America, not Europe.)
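The trade-off between the two error types can be made concrete with a toy signal-detection model (the score distributions below are invented for illustration): conduct generates a noisy “harm score”, and the evidentiary standard is a threshold above which the authority intervenes.

```python
import random

def error_rates(threshold, n=20_000, seed=1):
    """Toy signal-detection model of an evidentiary standard.

    Benign conduct produces low 'harm scores', anticompetitive
    conduct higher ones, with substantial overlap. Intervening
    whenever the score exceeds the threshold trades Type I errors
    (condemning benign conduct) against Type II errors (missing
    harmful conduct).
    """
    rng = random.Random(seed)
    benign = [rng.gauss(0.0, 1.0) for _ in range(n)]
    harmful = [rng.gauss(1.5, 1.0) for _ in range(n)]
    type1 = sum(s > threshold for s in benign) / n    # false positives
    type2 = sum(s <= threshold for s in harmful) / n  # false negatives
    return type1, type2

strict = error_rates(threshold=2.0)   # "innocent until proven guilty"
lenient = error_rates(threshold=0.5)  # burden shifted onto the platform
print(strict, lenient)
```

Lowering the threshold (shifting the burden of proof onto the platform) catches more genuinely harmful conduct but condemns more benign conduct; as long as the two score distributions overlap, no threshold can eliminate both errors at once.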

So if Frontier were advising a modern King Hammurabi (the EU Competition Commissioner?) on a manifesto for reform, what rules would we suggest that he etch into his (digital) tablet? As with all the best sets of commandments, let’s start with the “don’ts”:

01. Don’t overload antitrust enforcement with responsibilities above its pay grade.

Competition watchdogs should be mindful that measures to promote and safeguard effective competition in digital markets may sit uneasily alongside – or potentially come into direct conflict with – measures designed to satisfy other policy objectives such as safeguarding data privacy or checking online abuse. But as discussed above, competition authorities (and, truth be told, economists in general) are not well placed to make judgements about the trade-offs between these wider societal and moral values. Government itself needs to make the tough calls in these circumstances, with the help of advisers and/or regulators with specific skills in the other issues thrown up by the online world.

02. Don’t ditch the old – but do diversify.

For all the focus on digital markets and the new economy, the “old” economy has not disappeared (after all, smartphones would not exist without metals, plastics and electricity). Anyone calling for a complete retooling of competition policy would do well to consider that not all industries are characterised by rapid innovation and that the stable “classical” analytical framework underpinning much of the existing toolkit may apply quite adequately to many more mature markets. The key point is that markets have become more diverse: the “new economy” co-existing with the “old economy”, and local or regional markets co-existing with global ones. This calls for a greater diversity of tools. 

03. Don’t let the availability of evidence dictate the analytical framework.

This temptation may be difficult to resist when some types of evidence are hard to come by – e.g. on the willingness of consumers to substitute between different platforms when those platforms are free to use. Some commentators – including Crémer et al. – have called for a return to a “characteristics-based” approach to market definition for digital markets: i.e. one based on a subjective description of the character and functionalities of the platforms in question. Indeed the EC’s approach to market definition in the recent Google Shopping case (see above) arguably followed just such an approach. But it is one that in the past led to some much-ridiculed market definition decisions. Take, for example, an infamous European Court judgement in 1978, which appeared to conclude that there was a distinct economic market for fruit that could be eaten by toothless people:

...The banana has certain characteristics, appearance, taste, softness, seedlessness, easy handling, a constant level of production which enable it to satisfy the constant needs of an important section of the population consisting of the very young, the old and the sick.

It would be a shame if the challenges in identifying evidence in digital markets tempted competition watchdogs to revert to a technique that was so discredited four decades ago.

04. Don’t succumb to over-ambitious plans to turn yourself from an enforcement authority into a supervisory one, with different political and public expectations.

Detecting breaches of competition law and opining on mergers is one thing: maintaining supervision of the behaviour of all enterprises operating in your markets is quite another, for which competition authorities historically do not have the capability or capacity. “Interim measures” and/or ex ante market interventions not only create the danger of Type I errors identified above; they also risk responsibility for all market failures being placed at the door of the regulator. When taking steps in this direction – even those cautious ones suggested below – the watchword is: proceed with care.

So much for the negatives, but what about the positive ideas? Here are four interlinked suggestions:

01. Focus on the process, not just the outcomes.

It has become even more difficult to understand whether outcomes are competitive when markets are constantly being buffeted by the forces of innovation and globalisation. Rather than prioritising exploitative abuse cases, competition authorities should continue to direct their efforts towards ensuring that the dynamic process of price discovery and rivalry identified by Hayek remains strong in those markets. Being able to articulate better the drivers of rival entry, innovation and product repositioning will become important to understanding whether this process is working well. And this focus on the process of competition leads directly to the next recommendation...

02. Where innovation is a key driver of competition, then… focus on innovation!

It makes no sense to pretend that markets are in a stable equilibrium in industries where innovation is dramatically transforming the user experience. Competition authorities have already developed tools that put innovation under the microscope. For example, in considering the proposed merger between chemical companies Dow and DuPont in 2017, the EC introduced the idea of “markets for innovation”, assessing the evidence on (i) the importance of Dow and DuPont’s R&D in driving innovation in the industry and (ii) the potential impact of the merger on the incentive to innovate. The EC also hit on a sensible solution, making its clearance of the merger conditional on DuPont divesting much of its global R&D business. Transposing such an approach to dynamic digital markets would require further thought – in particular, about where to draw the line when identifying the types of innovation relevant to the competition assessment: in the world of the “internet of things” these lines can get very blurred indeed. But it is too pessimistic to believe that the interplay between competition and innovation makes rigorous and evidence-based assessment impossible. After all, firms themselves think about this interplay all the time when making R&D investment decisions.

03. Work with the grain of competitive dynamics in digital markets.

Forcibly breaking up online markets to create strong rivals to existing incumbents might sound like a good idea – it looks tough, and has a long pedigree, dating back to the break-up of Standard Oil in the US over a century ago. But fragmentation would reduce the consumer benefits derived from economies of scale and scope, data accumulation and – perhaps most importantly – network effects. One of the main advantages of social and professional network platforms, for example, is their ability to allow people to share their experiences with all their friends and colleagues. Where these effects are powerful, breaking up the networks would be futile. The same forces that led consumers to cluster on a single platform in the first place will lead them to congregate on a single platform again. And this leads to the next recommendation…

04. Take a closer look at behavioural measures.

Competition authorities notoriously have a preference for “structural” solutions to competition problems (surgical interventions to restructure the market, e.g. by requiring merging firms to divest assets). Behavioural remedies – rules requiring firms to act in a certain way – tend to be less favoured. This reluctance is understandable: implementation of behavioural measures needs to be monitored and reviewed on an ongoing basis, nudging competition authorities into a more problematic supervisory role. But if structural remedies prove unworkable, behavioural remedies may offer a valuable alternative. For example, if network effects lead a market to tip towards a single winner, it would be more sensible to accept this and – if necessary – think about regulating the winner rather than breaking it up.

Further work would of course be required to turn these proposals into a practical set of guidelines. And such evolution is not without risk: as noted above, taking a more proactive and forward-looking approach to the role of innovation may require competition authorities to make some difficult judgement calls. Care will always be needed to safeguard the essentials of good regulation: objectivity, consistency, predictability and freedom from political influence. But economists have shown themselves capable of reforming competition policy for the better in the past and, with careful thought and an eye for rigour, they can help do so again now.