
The European Spectrum Capacity Crunch

September 14, 2012

There has been much talk about the spectrum capacity crunch. In the U.S., plans have been approved to free up 500 MHz of spectrum for cellular use. A similar approach has been adopted in a number of European countries. Cellular operators are lobbying hard for more spectrum and recently the World Radiocommunication Conference (WRC-12) decided that the 700 MHz band should become a cellular allocation throughout Europe.

The reason for all this is, of course, the rapidly growing mobile data usage, driven initially by the introduction of the iPhone and then, increasingly, other smartphones, tablets and so on. By providing a simple way to select, access and view data while mobile, these devices have dramatically changed the attractiveness of using mobile data and hence the demand. The speed at which this demand grew caught many mobile operators by surprise, resulting in severe network congestion. This led, for example, to O2 adding another 500 cells to its network in London. Where all this is heading is unclear, but Cisco forecast in the region of 18 times growth in mobile traffic between 2011 and 2016 (see Reference 1), fuelled predominantly by mobile video.

A crunch occurs when the demand outstrips the supply. The worry in the industry is that it will not be possible to meet this increased demand cost-effectively and as a result there will be congestion, higher prices, diminished user experience or some mix of these. This article, based in part on the 7th Annual European Spectrum Management Conference in Brussels in June 2012, looks at what can be done to meet demand and whether a crunch is really likely.

Is it Really a Spectrum Crunch?

Mobile network capacity is determined by three factors – the amount of spectrum, the efficiency of the technology and the number of cells deployed. Capacity scales approximately linearly with the first two: doubling the spectrum doubles the capacity, and likewise doubling the technical efficiency doubles the capacity. Capacity scales faster than linearly as cells are made smaller. When, say, a single cell is replaced by four smaller ones, there is a four times gain simply from reusing the same spectrum in each of the smaller cells.

Figure 1 Effect of adding cells with smaller radius (Source: W. Webb, "Understanding Weightless").

Also, because mobiles are now closer to their nearest base station, they can typically make use of more efficient modulation, such as higher-order QAM. This increases the cell throughput. The actual gains vary dramatically from technology to technology and depend on the deployment conditions, but an estimate of the effect can be seen in Figure 1.

In this case, if a cell of 5 km radius is replaced by six cells of 2 km radius, the red line shows a capacity gain of six times from the frequency reuse alone, while the blue line shows a total gain of around 25 times once the modulation efficiency improvements are taken into account. A mobile operator would logically deploy the lowest cost option(s) to increase network capacity, until the point at which supply and demand balance. As mentioned previously, O2 selected smaller cells.
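As a rough illustration of this scaling, the short Python sketch below works through the cell-splitting arithmetic; the spectral-efficiency uplift is an assumed illustrative value, not a figure taken from the model behind Figure 1.

# Illustrative sketch of cell-splitting gains; the efficiency uplift is an assumed value.
def reuse_gain(old_radius_km, new_radius_km):
    # Number of smaller cells needed to cover one larger cell's area.
    # Each small cell reuses the full spectrum, so this is also the reuse gain.
    return (old_radius_km / new_radius_km) ** 2

n_cells = reuse_gain(5.0, 2.0)   # ~6.25, i.e. roughly six 2 km cells per 5 km cell

# Assumed average spectral-efficiency uplift because users sit closer to the
# base station and can use higher-order QAM (illustrative figure only).
efficiency_uplift = 4.0

total_gain = n_cells * efficiency_uplift
print(f"cells: {n_cells:.1f}, total capacity gain: {total_gain:.0f}x")   # ~6x reuse, ~25x in total

With these assumed figures the sketch reproduces the six times and roughly 25 times gains described above.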

So the answer, in principle, to the question as to whether it is a spectrum crunch is “no.” It is a capacity crunch and there are three levers – spectrum, technology and cell size – that can be used to address this. But how practical are each of these levers?

Can We Find More Spectrum?

It is often said (by Americans) that “spectrum is like real estate – they just don’t make it any more.” That is partly true, but land can be reclaimed and taller apartments can be built. Similarly, spectrum can be reclaimed from other uses, such as broadcasting, or used more efficiently with techniques such as ‘white space access.’ But before looking into these, let us take a step back to understand the scale of what is needed. If 18 times growth is expected by 2016 (and the forecasts are still rising in 2016, so even more growth is likely beyond then), and spectrum alone were used to meet it, then operators would need at least 18 times as much spectrum as they currently have.

The amount currently available varies from country to country and depends on whether, for example, the 2.6 GHz band has been auctioned yet, but is in the region of 300 to 400 MHz in total. Taking the mid-point of 350 MHz suggests some 6.3 GHz of spectrum would be needed by 2016. Since operators require spectrum below 3 GHz to achieve viable propagation, this is clearly impossible. So there is no way whatsoever that the capacity crunch can be wholly addressed by using more spectrum.
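The arithmetic behind that figure is straightforward; the short sketch below simply makes it explicit, using the 350 MHz mid-point and the 18 times factor quoted above.

# Back-of-envelope check: spectrum needed if growth were met by spectrum alone.
current_spectrum_mhz = 350        # mid-point of the 300 to 400 MHz quoted above
growth_factor = 18                # Cisco's 2011-2016 mobile traffic forecast
required_mhz = current_spectrum_mhz * growth_factor
print(f"{required_mhz} MHz = {required_mhz / 1000:.1f} GHz")   # 6300 MHz, i.e. 6.3 GHz

Far more than could realistically be found below 3 GHz.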

In fact, one can be fairly sure about what spectrum might be reclaimed between now and 2016, since the process takes at least five years. In the near term, it is the 800 MHz and 2.6 GHz bands – already auctioned in many European countries – which together come to approximately 250 MHz of spectrum. After 2020, approximately 100 MHz might be added to that in the 700 MHz band and perhaps the 2.3 GHz band. Some release of military spectrum in some countries is possible, but unlikely to be of much use unless globally harmonized. So even assuming a doubling of spectrum through reclamation is optimistic.

The other option is to make better use of what is already available, using ‘white space’ concepts whereby secondary users can temporarily use spectrum that the license holders are not using themselves. Much work has been done on accessing the white space in and around TV transmitters and this looks very promising for a range of new technologies including machine-to-machine systems, based on the emerging Weightless standard.

But, at best, there is approximately 100 MHz of white space available in the TV bands, and there are many locations where this falls to 10 to 20 MHz. This is fine for M2M but of no real value to cellular. Of course, white space access could be extended to other bands, could be used in an unlicensed or licensed format (the latter sometimes known as authorized shared access or ASA) and will be extremely valuable in stimulating innovation and new applications, but it is not the savior that cellular needs. So, broadly, spectrum can provide at best a doubling of capacity. A complication is that cellular traffic can also be offloaded into other spectrum, such as the Wi-Fi bands; this is considered later.

Are There Technical Solutions?

So if not spectrum, can technology take the strain? The efficiency of radio systems is often measured in bits/s/Hz/cell. If that efficiency could be doubled, twice as many bits could be carried per hertz and capacity would double. Could some new technology improve capacity by 18 times?

There are many contenders for improvements in efficiency. Multiple antenna or MIMO systems hold the most promise, with apparently large gains possible, if large numbers of antennas can be supported – this is a key ongoing area of research for many in the RF community.

In particular, finding ways to embed two, four or even eight antennas into a mobile device so that each is relatively independent of the others, while still being able to handle multiple frequency bands, is extremely challenging, but it is ultimately the key mechanism by which gains will be realized. Methods to deploy large numbers of antennas unobtrusively on base station towers are also needed.

Another approach is transmissions from multiple cells – called coordinated multi-point transmission (CoMP) in 4G. Here, the same signal is sent simultaneously to a mobile from multiple nearby base stations, allowing it to aggregate the signal level. This can bring benefits by enabling higher modulation modes or lower levels of error correction to be used. Carrier aggregation is also part of LTE, whereby a device can simultaneously use multiple bands of frequencies to deliver higher data rates.

While carrier aggregation does not directly increase capacity, it might make smaller bands of spectrum more viable than would otherwise be the case. It also brings further problems. Making handsets that can work across large numbers of bands is very challenging; there are now some 40 bands identified for 4G and it seems unlikely that any handset could support them all. So improvements, particularly in RF technologies, to facilitate wider band RF front-ends, better isolation between different bands and broadband antennas will be critically important.

Even with all these enhancements, cellular systems are pushing hard against physical limits, and most estimates suggest that practical 4G systems will be about twice as efficient as current 3G systems. Even that may come at some cost in terms of complexity, device form factor, battery life and more.

At the 7th Annual European Spectrum Management Conference, panelists from a number of European companies offered a range of other ideas. One suggested using satellite to broadcast the most popular video content, thus avoiding sending it individually to multiple devices. But this requires handsets to have satellite antennas and that many users watch the same content simultaneously, which is likely on home TV screens but less so on mobile devices.

Another looked at developments in small cells and suggested very small form-factor base stations linked by fiber optic cable. The base station then essentially becomes an upconverter, taking the signal from the fiber, amplifying it and transmitting it as RF. This is one way to realize small cells cost-effectively, but it does require widespread low-cost fiber deployment, which is often not available. A strong theme from many panelists was sharing spectrum using innovative new access techniques, as mentioned above. So, that gives twice from spectrum and twice from technology, neither quite arriving by 2016 – for a total of four times. That is still a long way from the 18 times needed.

So What About Smaller Cells?

Unlike spectrum, which “they don’t make any more,” and technology, which is bounded by Shannon’s limit on channel capacity, the number of cell sites is essentially unlimited. The only constraining factor is cost (it can be difficult to find sites and to get permission to use them, but typically time and enough money can overcome these issues). This is why, over the last few decades, most of the gains in capacity have come from more cell sites.

For example, Marty Cooper, in forming Cooper’s Law, noted that wireless network capacity world-wide improved about 1 million-fold from 1950 to 2000. This comprised 15 times from more spectrum, 25 times from more efficiency and a massive 2,700 times from more cells.

If the 18 times demand growth forecast is assumed, then with at best twice the spectrum and twice the efficiency, approximately five times the cell numbers are needed – trivial by historical standards. Alternatively, operators could opt for the same spectrum, twice the efficiency and six times the cell count, or any other combination. This shows that, whatever happens, a large increase in cell numbers is needed to meet predicted demand growth.
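The trade-off between the three levers is simple multiplication; the sketch below just works through the figures quoted above, including the Cooper's Law decomposition.

# Splitting the 18x growth across the three levers discussed above.
target_growth = 18
spectrum_gain = 2
efficiency_gain = 2
cells_needed = target_growth / (spectrum_gain * efficiency_gain)
print(f"Cell-count increase needed: {cells_needed:.1f}x")   # 4.5x, i.e. roughly five times

# Cooper's Law decomposition quoted above: the factors multiply to about a million.
print(15 * 25 * 2700)   # 1,012,500 - roughly the million-fold gain from 1950 to 2000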

So the number of small cells is limited only by cost. Cost is far from a trivial factor, of course, and especially important when the expectation (strangely) is that the 18 times increase in capacity will be delivered without changing the average revenue per user (ARPU) much. Above all, low-cost and widely available backhaul will be needed to enable such small cell deployment.

But We Have Lots of Them!

Actually, there are small cells – lots of them – in the form of Wi-Fi in homes, offices, shops, restaurants and many more locations. In a typical European country, each mobile operator has approximately 15,000 base stations; but in the UK, BT has brought together over 3 million Wi-Fi nodes under its sharing project, FON, in which home owners open up their Wi-Fi in return for free access to the hotspots of others who do the same. So BT has assembled two orders of magnitude more cells than a mobile operator.

Simply finding a way to seamlessly integrate Wi-Fi cells and femtocells into mobile networks would mostly solve the mobile data crunch, assuming a reasonable degree of correlation between demand and Wi-Fi hotspots (which seems likely in all but a few special cases such as trains). Such an approach might require some regulatory change, a small amount of standardization of ‘feed-in’ capacity and modifications to the cellular value model, but these are relatively minor compared to the difficulties involved in repurposing spectrum.

Figure 2 Split of traffic from smartphones in Europe on a typical weekday (Source: Richard Thanki).

This has not escaped the notice of cellular operators. Most ‘cellular’ traffic is already offloaded onto Wi-Fi – for example, Figure 2, from a recent study, shows the split of traffic from smartphones in Europe on a typical weekday. Clearly, something like 5 to 10 times as much traffic is being handled via Wi-Fi as via cellular. This off-loading is only likely to grow as more Wi-Fi nodes are added and initiatives such as Hotspot 2.0 make it seamless to log onto nearby Wi-Fi access points.

What if We Do Not Resolve It?

So, demand for mobile data might grow hugely if it were essentially free. However, mobile operators face a cost in enhancing their networks to increase capacity, whether in technology upgrades, spectrum auction fees or cell deployment costs. They will select the lowest cost option first, then the next lowest and so on.

So, it will become progressively more expensive to grow capacity. Since operators are generally not philanthropists, their data charges must rise to meet these costs. Higher prices will dampen demand and an equilibrium will be reached. There will be no more data crunch problems. Where this balance might fall is hard to predict and likely to change over time as new, more valuable applications emerge.

All of this is really basic economics and engineering. Issues become more complex when some argue that mobile broadband is too important to be left to markets alone. They suggest that its value to society in enabling innovation, productivity, social inclusion or other important goals is so great that price should not be allowed to restrict demand.

This is somewhat akin to saying that electricity is so important to society that all homes should be supplied with all they want at a flat rate. Few would accept that flat-rate electricity is appropriate. With mobile devices predominantly used for entertainment, it is hard to see which important societal goals are being achieved, other than enabling new high scores on Angry Birds.

Let us assume that there were important societal reasons for delivering unlimited mobile broadband. Governments could achieve these goals through indirect subsidies – predominantly via the provision of more spectrum at low cost – or direct subsidies – paying mobile operators to provide flat rate services. The latter is akin to the subsidies paid to rail operators by some governments to provide low-cost (although not unlimited) train travel.

Economists are clear that direct subsidies are preferable to indirect subsidies. Spectrum should be used efficiently and by the entity that values it the most, but not necessarily as a policy tool in delivery of socially valuable objectives. So, if governments really considered mobile broadband an important social good, they should subsidize mobile operators to provide it.

In some cases, markets are not perfect. In the case of mobile broadband, users have limited awareness of their usage and so can find it difficult to adapt their behavior according to price. This is why most prefer bundles, which simplify their bills and provide certainty. This does complicate supply and demand somewhat, but the same issue is not insurmountable in markets such as electricity, where users generally do not understand exactly how much each appliance uses, but have developed a sufficiently good approximation that they are rarely surprised by their bills.

How It Will Likely Play Out

As is often the case, the solution is likely to be to scrape through. Operators will get more spectrum (twice), will progressively deploy LTE (twice) and will deploy some new cells (probably more than twice). They will use ever more Wi-Fi offload (perhaps four times), or more likely users will arrange this for themselves. All of this will probably not be quite enough and will be expensive, so data costs, especially for heavier users, will rise. This may not be because operators raise their prices per GB (they may in fact reduce them), but because they will not reduce them by a factor of 18, so the net effect for end users is an increased cost. This will dampen demand somewhat, bringing it back towards the level of supply that is economically sensible.
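Putting these rough factors together (a purely illustrative sketch using the figures quoted in this article, not a forecast):

# Composite cellular gain from the levers discussed above (illustrative factors only).
spectrum_gain = 2        # roughly twice the spectrum
efficiency_gain = 2      # LTE roughly twice as efficient as 3G
cell_gain = 2            # "probably more than twice" the cells; taken as 2 here

cellular_gain = spectrum_gain * efficiency_gain * cell_gain
print(f"Cellular capacity gain: about {cellular_gain}x against roughly 18x demand growth")
# The remaining gap is closed by Wi-Fi offload and by effectively higher
# prices dampening demand, as argued above.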

Resolving the mobile data crunch may not be so difficult after all!

Reference

  1. Cisco, “Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2011–2016,” 2012.

William Webb

William Webb has a first class honors degree in electronics, a PhD and an MBA. He is CTO and one of the founding directors of Neul Ltd., a company developing machine-to-machine technologies and networks, which was formed at the start of 2011. Prior to this, he was a director of technology at Ofcom, where he managed a team providing technical advice and performing research across all areas of Ofcom’s regulatory remit. He also led some of the major reviews conducted by Ofcom including the Spectrum Framework Review, the development of Spectrum Usage Rights and most recently Cognitive or White Space Policy. Previously, Webb worked for a range of communications consultancies in the UK in the fields of hardware design, computer simulation, propagation modeling, spectrum management and strategy development. He also spent three years providing strategic management across Motorola’s entire communications portfolio, based in Chicago.