The capability enhancements proposed in the move from 4G LTE to 5G New Radio (5G NR) represent a massive leap designed to boost mobile telecommunications applications and enable many new opportunities. Aside from significantly improving all major performance metrics, the 4G to 5G transition heralds a more flexible and capable radio architecture, with additional mmWave frequency spectrum on top of legacy 4G LTE-Advanced (LTE-A) bands and new sub-6 GHz frequency bands. 5G also intrinsically supports new use cases beyond enhanced mobile broadband (eMBB), including ultra-reliable low latency communications (uRLLC) and massive machine-type communications (mMTC). Plans also exist to further expand the 5G frequency bands to cover both licensed and unlicensed mmWave spectrum. Moreover, 5G NR allows for both frequency-division duplex (FDD) and time-division duplex (TDD) operation with wider channel bandwidths, user equipment (UE) with increased maximum power, higher-order modulation schemes and multi-antenna architectures. 5G NR RF front-end (RFFE) designers benefit from understanding these trends, along with the new RF hardware and technologies needed to address the resulting challenges.
5G NR TRENDS
The 5G NR deployment ramp-up is in full swing, with many organizations striving to achieve 5G NR performance goals. The performance goals for 5G, along with how they compare with 4G, are shown in Figure 1. Figure 2 shows some of the new 5G NR spectrum allocations. The mmWave frequency ranges (FR) becoming known as FR2-1 and FR2-2 are of great interest for several reasons: a large amount of bandwidth is available, these bands are relatively free of interfering deployments and the RF hardware elements and antennas are proportionally smaller. Perhaps a bit counterintuitively, proponents view the increased atmospheric attenuation at these frequencies as a benefit that can aid in mitigating interference.
FR2 mmWave technology allows for extremely compact advanced/active antenna systems (AAS), along with sophisticated MIMO and beamforming systems offering higher throughput than 4G LTE-A technologies. The 5G NR enhancements over 4G LTE-A, together with the new frequency bands, enable greater capacity, connection density, peak data rates and user-experienced data rates. 5G NR also brings higher-order modulation schemes, new channel coding and additional layers that support new use cases. Combined, these features of the 5G NR standards empower 5G to boost mobility, reduce latency, enable higher network energy efficiency and provide better spectral efficiency than 4G LTE-A.
5G NR introduced the concept of a bandwidth part (BWP) as a set of contiguous resource blocks (RBs) that can be set to different transmission bandwidths. Each BWP can have its own numerology and, while multiple BWPs can be defined for a given component carrier (CC), only one BWP can be active at a time in the downlink (DL) and uplink (UL). The introduction of BWPs enables far more flexible use of spectrum based upon individual UE use cases.
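The numerology mentioned above follows a simple rule from 3GPP TS 38.211: subcarrier spacing scales as 15 kHz × 2^μ, and the number of slots per 1 ms subframe scales the same way. The sketch below illustrates that relationship; it is a simplified view, not a substitute for the specification.

```python
# Illustrative sketch of 5G NR numerology: SCS = 15 kHz * 2^mu, with
# 2^mu slots per 1 ms subframe (per 3GPP TS 38.211). Higher mu means
# wider subcarriers and shorter slots, which lowers air-interface latency.
def numerology(mu: int) -> dict:
    scs_khz = 15 * (2 ** mu)          # subcarrier spacing in kHz
    slots_per_subframe = 2 ** mu      # a subframe is always 1 ms
    slot_duration_us = 1000 / slots_per_subframe
    return {"scs_khz": scs_khz,
            "slots_per_subframe": slots_per_subframe,
            "slot_duration_us": slot_duration_us}

for mu in range(5):                   # mu = 0..4 covers 15 to 240 kHz SCS
    print(mu, numerology(mu))
```

For example, μ = 1 (30 kHz SCS, common in FR1) gives 500 μs slots, while μ = 3 (120 kHz SCS, common in FR2) shortens the slot to 125 μs.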
Compared to a maximum channel bandwidth of 20 MHz per CC for 4G LTE-A, 5G NR FR1 can use a 50 MHz maximum channel bandwidth per CC with 15 kHz subcarrier spacing (SCS) and 100 MHz with 30 kHz or 60 kHz SCS. FR2-1 allows for 200 MHz when using 60 kHz SCS and up to 400 MHz when using 120 kHz SCS. The subcarrier spacings for different time slots are shown in Figure 3. Table 1 shows the higher SCS and maximum transmission bandwidth allotments that are being considered for the 52 to 71 GHz frequency range of FR2-2.
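The relationship between channel bandwidth, SCS and RB allocation can be checked with simple arithmetic: each RB spans 12 subcarriers, so the occupied bandwidth is N_RB × 12 × SCS, with the remainder of the channel left as guard band. The RB counts used below are taken from the 3GPP TS 38.101 transmission bandwidth tables; treat this as an illustrative sketch rather than a spec reference.

```python
# Occupied bandwidth of an NR transmission: N_RB resource blocks, each
# 12 subcarriers wide. RB counts are from the 3GPP TS 38.101 maximum
# transmission bandwidth configuration tables.
def occupied_bw_mhz(n_rb: int, scs_khz: int) -> float:
    return n_rb * 12 * scs_khz / 1000.0

# FR1: a 100 MHz channel at 30 kHz SCS allows up to 273 RBs
print(occupied_bw_mhz(273, 30))   # 98.28 MHz occupied; the rest is guard band
# FR2-1: a 400 MHz channel at 120 kHz SCS allows up to 264 RBs
print(occupied_bw_mhz(264, 120))  # 380.16 MHz occupied
```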
4G LTE-A was essentially designed for mobile broadband, though it was possible to configure 4G LTE-A to support other functions within that framework. 5G NR has additional use case features built into the standard that support new applications for cellular wireless technology. The three initial and key use cases for 5G NR are eMBB, uRLLC and mMTC. Each use case has dedicated specification details and features designed to support it in ways that would not be feasible with a one-size-fits-all solution.
As an example, uRLLC requires much lower latency and higher mobility for vehicle-to-infrastructure (V2I) or vehicle-to-vehicle (V2V) applications for autonomous vehicles or driver safety features. However, uRLLC applications do not prioritize capacity, peak data rate, UE data rate or spectral/network efficiency as much as eMBB applications do. Similarly, mMTC use cases prioritize connection density and network efficiency over other performance metrics to better serve communications among multitudes of machine-type sensors, actuators and beacons. Figure 4 shows a spider diagram of the relative importance of the 5G NR network goals to the three main use cases.
5G NR RFFE DESIGN CHALLENGES
The performance and capability enhancements of 5G NR versus 4G LTE-A bring additional challenges that RFFE designers must address. One of these challenges is TDD asynchronous inter-band operation. This becomes especially important when the transmission noise of one band falls into the receive band of a second band while they share an overlapping time slot. There are over 50 UL versus DL slot allocation formats for use in any one band and there are multiple band combinations where this overlap could occur. The impact of this potential interference on network operation is a function of the noise levels, intermodulation (IMD) products and filter rejection levels of the hardware. Figure 5 shows the extent of this potential problem.
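The asynchronous slot-overlap problem above can be sketched as a simple pattern comparison: flag any slot in which one band transmits while the other receives. The slot patterns here are hypothetical placeholders, not actual 3GPP slot-format definitions, and a real analysis would also account for symbol-level timing and flexible symbols.

```python
# Hedged sketch: find slots where band A transmits ('U') while band B
# receives ('D') during asynchronous TDD operation. Patterns are
# hypothetical examples, not 3GPP slot formats.
def tx_into_rx_slots(pattern_a: str, pattern_b: str) -> list:
    """Return slot indices where A's UL overlaps B's DL."""
    return [i for i, (a, b) in enumerate(zip(pattern_a, pattern_b))
            if a == "U" and b == "D"]

# 'D' = downlink, 'U' = uplink (flexible symbols omitted for simplicity)
band_a = "DDDUU"   # hypothetical pattern for band A
band_b = "DDDDU"   # hypothetical pattern for band B
print(tx_into_rx_slots(band_a, band_b))  # [3]: A transmits while B receives
```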
The increased complexity of additional bands and uplink band combinations of 5G NR compared to previous generations leads to an increased risk for self-desense or self-interference, especially in the sub-6 GHz frequency band. Higher maximum and average power levels with 5G NR TDD, as shown in Figure 6, are other factors to consider. 5G NR presents more than 60 FR1 band definitions with over 3500 carrier aggregation and dual-connectivity (DC) band combinations, along with the potential for asynchronous operation. This includes multi-mode operation with the UE and base stations operating LTE-A and 5G NR transceivers simultaneously (EN-DC). If any of these combinations result in substantial IMD distortion products, leakage, noise or other interference injected into the receiver, the receiver’s sensitivity is reduced.
Maximum sensitivity degradation (MSD) is the metric in the 5G NR standard that defines the permissible degradation of the receiver's reference sensitivity (REFSENS) for a particular band combination. This value depends upon parameters like maximum TX power, isolation levels, linearity, bandwidth and carrier frequency, as well as self-interference caused by coupling and crosstalk between the various functional blocks of the RFIC, RF modules, phone board and the entire RFFE.
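The sensitivity degradation behind an MSD figure can be estimated with the standard noise-rise calculation: if interference power I (in dBm, in the receive bandwidth) is added to a native noise floor N, the effective floor rises by 10·log10(1 + 10^((I−N)/10)) dB. The numbers below are illustrative, not taken from any specific band combination.

```python
import math

# Hedged sketch: receiver desensitization from an added interference
# term. N is the native noise floor and I the interference power, both
# in dBm referenced to the same bandwidth at the receiver input. The
# powers add linearly, so the floor rises by 10*log10(1 + 10^((I-N)/10)).
def desense_db(noise_floor_dbm: float, interference_dbm: float) -> float:
    rise = 1 + 10 ** ((interference_dbm - noise_floor_dbm) / 10)
    return 10 * math.log10(rise)

# Interference equal to the noise floor costs ~3 dB of sensitivity;
# interference 10 dB below the floor costs ~0.4 dB.
print(round(desense_db(-100.0, -100.0), 2))  # 3.01
print(round(desense_db(-100.0, -110.0), 2))  # 0.41
```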
Figure 7 provides an example of the self-desense effect between two 5G NR FR1 frequency bands. In this case, the UL transmission of Band n78 (3300 to 3800 MHz) couples, possibly at multiple locations in the UL signal routing, into the UL transmission of Band n3 (1710 to 1785 MHz). The coupled signals create intermodulation products that fall within the Band n3 DL receive chain of the RFFE (1805 to 1880 MHz), as shown in Figure 8. In this example, Band n78 is a TDD band while Band n3 is an FDD band, so the UL and DL of Band n3 are at different frequencies. Nevertheless, the second-order intermodulation distortion (IMD2) products created by mixing the interference coupled from the Band n78 UL with the Band n3 UL can fall within the Band n3 DL. These products may contain enough energy to desensitize the Band n3 receiver, worsening the bit-error rate through SNR degradation and degrading throughput.
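The frequency arithmetic behind this example is easy to verify: the IMD2 difference products of the two uplinks span the range of f_n78 − f_n3 over both bands, and that range can be intersected with the Band n3 DL allocation. Band edges below are the 3GPP definitions quoted above.

```python
# Sketch of the IMD2 self-desense check described above: where do the
# difference products of Band n78 UL and Band n3 UL land relative to the
# Band n3 DL receive band? All frequencies in MHz.
n78_ul = (3300, 3800)
n3_ul = (1710, 1785)
n3_dl = (1805, 1880)

# Second-order difference products f_n78 - f_n3 span this range:
imd2 = (n78_ul[0] - n3_ul[1], n78_ul[1] - n3_ul[0])
print(imd2)  # (1515, 2090)

# Intersect the IMD2 range with the Band n3 DL band:
overlap = (max(imd2[0], n3_dl[0]), min(imd2[1], n3_dl[1]))
if overlap[0] <= overlap[1]:
    print(f"IMD2 overlaps Band n3 DL: {overlap} MHz")
```

Here the difference products cover 1515 to 2090 MHz, so the entire Band n3 DL (1805 to 1880 MHz) is exposed to IMD2 energy, which is exactly the desense mechanism the figure illustrates.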