With the rollout of 5G, an important link in the new radio network infrastructure chain is the power amplifier (PA) in the gNodeB base station. These PAs are expected to operate without failure, often in extreme conditions, while the 5G standards demand broader bandwidths at higher frequencies and with higher efficiency. Accordingly, the PAs and the semiconductor devices comprising them face more rigorous and strenuous testing requirements than the RF front-ends in mobile devices.
Traditionally, Si laterally-diffused metal-oxide semiconductor (LDMOS) was the preferred process technology for base station PAs; however, because LDMOS performance degrades as the frequency increases beyond 3 GHz, GaN on SiC has emerged as a competitive alternative, offering distinct advantages for high power applications. First, GaN’s larger bandgap compared to Si yields a higher breakdown voltage and greater thermal stability at elevated temperatures. Second, GaN on SiC has higher thermal conductivity than Si, which means higher efficiency at comparable operating voltage and eases the challenge of thermal management. Third, GaN’s breakdown field is significantly higher than Si’s, providing roughly 10x the voltage handling before failure, which enables GaN devices to be manufactured with smaller die despite the higher power density.
The advantages of this process technology bring unique test challenges to qualify the performance and robustness of RF GaN devices and MMICs. As GaN is a relative newcomer in base station PAs, production testing is a mix of process characterization, performance qualification and reliability evaluation. The higher performance characteristics and unique biasing required for GaN amplifiers add complexity compared to traditional LDMOS amplifiers. This article discusses how test systems can address this complexity.
Compared to enhancement mode LDMOS, GaN HEMTs require both positive drain and negative gate bias from the test equipment. While the voltages themselves are conventional, the idle state voltages of the supplies and the sequencing of the DC bias must be carefully managed to avoid damaging the device under test (DUT) or the test equipment. As a depletion mode FET, a GaN device requires a negative pinch-off voltage at the gate to keep the transistor turned off before the drain voltage is applied, followed by a gate voltage adjustment to the correct bias point before RF testing. The reverse sequence must be applied at the end of testing, before the next device is tested. This requires the test equipment to have specialized sequencing and idle state control, and the device interface must provide a “fail safe” to prevent a faulty device from damaging the socket or test equipment.
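The sequencing described above can be sketched in a few lines. This is a minimal illustration only: the `SmuChannel` class and all voltage values are hypothetical stand-ins, not a real instrument driver or recommended bias points.

```python
# Bias sequencing sketch for a depletion-mode GaN HEMT.
# The SmuChannel API and the voltage values are hypothetical examples.

VGS_PINCH_OFF = -5.0   # V, holds the FET off (example value)
VGS_QUIESCENT = -2.7   # V, sets the quiescent drain current (example value)
VDS_OPERATING = 48.0   # V, drain rail during testing (example value)

class SmuChannel:
    """Stand-in for an SMU driver; records actions so the sequence can be checked."""
    def __init__(self, name, log):
        self.name, self.log = name, log
    def set_voltage(self, volts):
        self.log.append((self.name, "set", volts))
    def output_on(self):
        self.log.append((self.name, "on", None))
    def output_off(self):
        self.log.append((self.name, "off", None))

def bias_up(gate, drain):
    gate.set_voltage(VGS_PINCH_OFF)   # pinch off the channel first
    gate.output_on()
    drain.set_voltage(VDS_OPERATING)  # only now is the drain voltage safe
    drain.output_on()
    gate.set_voltage(VGS_QUIESCENT)   # set the operating point for RF test

def bias_down(gate, drain):
    gate.set_voltage(VGS_PINCH_OFF)   # pinch off again before removing drain
    drain.set_voltage(0.0)
    drain.output_off()
    gate.set_voltage(0.0)             # gate returns to idle last
    gate.output_off()

log = []
gate, drain = SmuChannel("gate", log), SmuChannel("drain", log)
bias_up(gate, drain)
bias_down(gate, drain)
```

The essential invariant is that the gate is at pinch-off whenever the drain voltage changes, in both directions of the sequence.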
With lower “on” resistance compared to Si and a high breakdown voltage, GaN devices require high voltage and low current testing. Characterization of the breakdown voltage is common, requiring voltages greater than 100 V while simultaneously measuring current in the pA to nA range. This testing requires fast and precise response times from the equipment, to abort the test once the breakdown voltage is exceeded and avoid permanently damaging or degrading the DUT.
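The abort behavior can be sketched as a ramp with a leakage-current limit. The `set_v`/`read_i` callables and the limit values below are illustrative assumptions; production equipment implements this response in hardware, far faster than a software loop.

```python
def breakdown_sweep(set_v, read_i, v_max=150.0, v_step=1.0, i_abort=1e-6):
    """Ramp voltage until the leakage current exceeds i_abort (amps), then
    remove the stress immediately. Returns the abort voltage, or None if no
    breakdown is observed up to v_max. All limits are example values."""
    v = 0.0
    while v < v_max:
        v += v_step
        set_v(v)
        if abs(read_i()) >= i_abort:
            set_v(0.0)   # remove stress at once to avoid degrading the DUT
            return v
    set_v(0.0)
    return None

# Simulated DUT: pA-level leakage until breakdown near 120 V (illustrative)
state = {"v": 0.0}
def set_v(v): state["v"] = v
def read_i(): return 1e-5 if state["v"] >= 120.0 else 1e-12

v_bd = breakdown_sweep(set_v, read_i)
```

In this simulation the sweep stops at 120 V with the supply returned to 0 V, mirroring the fast-abort requirement described above.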
One of the most significant challenges of testing GaN RF devices is the prominent role of microwave test in evaluating performance. As 5G standards push frequencies above 3 GHz with stringent requirements for output power, linearity and efficiency, GaN device manufacturers are relying on RF performance to differentiate their products from LDMOS. This requires the production test environment to emulate the traditional microwave test bench. However, instead of multiple, dedicated benchtop instruments or custom load board solutions, comprehensive, integrated and modular microwave automated test equipment (ATE) solutions are favored (see Figure 1). This need has driven ATE architectures that deliver performance comparable to bench instruments while providing configurable test resources that can be tailored to the application and adapted as performance requirements shift. Such a system handles a range of microwave measurements with a single device insertion.
The challenge of testing GaN base station amplifiers in production is the combination of high RF power, high frequency and high measurement precision. These factors influence multiple facets of the ATE configuration, capabilities, interface design and calibration. On the equipment side, the microwave source and measurement instrumentation have moved from the device interface into dedicated microwave instruments in the ATE, combining spectrum analyzer, power meter and vector network analyzer capabilities. This integration enables broad frequency and test coverage with specified measurements and integrated calibration. It also removes instrument complexity from the device interface design, so the interface plays a more application-specific role, providing the load string for high power conditions and signal provisioning for specialized measurements (see Figure 2).
Amplifier measurements typically include gain, gain flatness, efficiency, adjacent channel power ratio (ACPR), error vector magnitude (EVM) and compression specifications such as P1dB and P3dB. With 5G PAs, the performance levels and higher frequency requirements place greater importance on calibration. While the type of instrumentation and measurement will dictate whether a vector or scalar power calibration is required, the benefits and limitations of each for amplifier testing should be understood.
Scalar calibration has advantages in linearity and efficiency measurements where power accuracy is critical. Scalar calibration typically involves a broadband power meter with sensor to determine the signal power at the measurement plane. However, the power sensor does not differentiate spurious or harmonic signal power from the source power, and the higher power levels in PA testing increase the likelihood of spurs and harmonics. To understand these sources of measurement error, the test designer must assess the test system’s source distortion, the receiver’s bandwidth and its out-of-band rejection.
Vector calibration has advantages for relative measurements such as gain and return loss. The calibration can correct for mismatch and accurately account for how much signal is absorbed by or reflected from the DUT, which affects the accuracy of power-added efficiency (PAE) measurements. PAE requires an accurate measurement of the amplifier’s actual input power, accounting for the often poor input match. The challenge for vector calibration is the higher signal power involved. While the small-signal model of PAs more closely resembles a 50 Ω environment, the large-signal characteristics can differ significantly, presenting VSWRs that cause large peak-to-peak gain errors. This warrants signal conditioning in the source to reduce large reflections that can damage the test equipment or the DUT. The calibration setup must also mirror the actual test and be repeatable, so the calibration accurately corrects matching errors during device testing.
The calibration reference plane should be as close to the DUT as possible to achieve an accurate representation of performance. This means the device interface environment must provide access to the microwave source and measurement port connections and support connecting power sensors and the open/short/load standards (see Figure 3). ATE that supports multi-layer calibration software, covering individual instruments, multi-instrument system integration and device interfaces, provides an advantage. This capability enables the ATE components to be efficiently and reliably exchanged and replaced on the production test floor.
Because of the high power density of GaN, the upper limits of the power and thermal handling of the device must be qualified. Despite GaN’s enhanced thermal properties, heat dissipation is a problem in a production test setting, as it is impractical to provide adequate heat sinking for on-wafer testing and most packaged devices. Therefore, most testing uses pulsed DC and RF measurements, which avoids damaging or degrading the DUT while providing in-situ conditions and robust qualification.
For instance, a typical P1dB or P3dB measurement involves several tests, all requiring fast, stable and repeatable measurements to accurately determine linearity. The first step is to determine the DC bias conditions at normal operation and ensure they are within the expected range. With the gate and drain bias set, a sweep of input RF power at one or more frequencies provides the linear gain and compression characteristics. Because of process and package variations, the sweep range can vary from 5 to over 20 dB with a step size under 0.5 dB to capture the linear range and compression of the power transfer characteristics. From the measured output power sweep, the input powers at the 1 dB and 3 dB compression points are extracted.
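The extraction step can be sketched in a few lines. This example uses an assumed soft-compression (tanh) amplifier model, with illustrative gain and saturation values standing in for real sweep data: the small-signal gain is estimated from the lowest-power points, and the input power where gain has dropped 1 dB is found by interpolation.

```python
import numpy as np

# Synthetic "measured" sweep from a soft-compression amplifier model
# (tanh saturation, 15 dB small-signal gain, 1 W saturated power --
# illustrative values only, standing in for real measurement data).
pin_dbm = np.arange(-5.0, 20.0, 0.25)
g_lin, p_sat = 10 ** (15 / 10), 1.0
pin_w = 10 ** ((pin_dbm - 30) / 10)
pout_w = p_sat * np.tanh(g_lin * pin_w / p_sat)
pout_dbm = 10 * np.log10(pout_w * 1000)

# Extract P1dB: gain vs. input power, small-signal gain from the
# lowest-power points, then interpolate the 1 dB compression crossing.
gain_db = pout_dbm - pin_dbm
g_ss = gain_db[:5].mean()              # small-signal gain estimate
drop = g_ss - gain_db                  # compression in dB, monotone here
i = int(np.argmax(drop >= 1.0))        # first point at >= 1 dB compression
frac = (1.0 - drop[i - 1]) / (drop[i] - drop[i - 1])
p1db_in_dbm = pin_dbm[i - 1] + frac * (pin_dbm[i] - pin_dbm[i - 1])
```

For this model the input-referred P1dB works out near 14.6 dBm; on real hardware the same interpolation runs over the measured sweep, and the output-referred P1dB follows as `p1db_in_dbm + g_ss - 1`.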
When Power Accuracy Matters, A Tuned Receiver Is Preferred
When correlating measured power between different setups, there is often a discrepancy between power meters and tuned receivers. PAs are generally driven into compression, which means there can be significant power in the harmonics. A tuned receiver measures only at a limited bandwidth around the desired frequency and rejects signals at other frequencies. A typical power meter is untuned and measures the total combined power of the signals in its operating bandwidth.
Example: A GaN PA operating at 5.6 GHz generates harmonics at 11.2, 16.8, 22.4 … GHz. The power meter reflects the combined power of the fundamental and all harmonics in its bandwidth, while a tuned receiver reports only the fundamental power at 5.6 GHz. While both measurements are technically correct, the user usually wants just the in-band power.
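The size of this discrepancy is easy to estimate by summing powers in linear units, the way a broadband sensor effectively does. The harmonic levels below (20 and 25 dBc) are illustrative assumptions, not measured data.

```python
import math

def dbm_to_mw(p_dbm):
    return 10 ** (p_dbm / 10)

def mw_to_dbm(p_mw):
    return 10 * math.log10(p_mw)

# Assumed example levels: 30 dBm fundamental at 5.6 GHz, with second and
# third harmonics 20 and 25 dB down (illustrative values).
p_fund_dbm = 30.0
harmonics_dbm = [10.0, 5.0]    # 11.2 GHz and 16.8 GHz components

# A broadband power meter sums everything in its bandwidth in linear units...
total_mw = dbm_to_mw(p_fund_dbm) + sum(dbm_to_mw(h) for h in harmonics_dbm)
p_meter_dbm = mw_to_dbm(total_mw)

# ...while a tuned receiver reports only the 5.6 GHz fundamental.
p_tuned_dbm = p_fund_dbm
error_db = p_meter_dbm - p_tuned_dbm   # the meter reads high by this much
```

With these assumed levels the broadband reading comes out roughly 0.06 dB high, an error of the same order as the ±0.1 dB accuracy budget discussed for efficiency measurements.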
Another significant measurement is efficiency or PAE, the amplifier’s power conversion ability. Efficiency is defined as the ratio of RF output power to the corresponding DC input power, expressed as a percentage; PAE subtracts the RF input power from the numerator. These measurements are indicators of end-use performance and competitive differentiation. While the definition of efficiency is straightforward, the measurement can be challenging as it relies on several underlying tests. For product specifications, efficiency is typically computed at multiple power conditions. In production test, the measurement typically happens in two steps: 1) a sweep of input power to characterize the amplifier’s power curve, followed by 2) RF and DC measurements at specific output power values: within the linear range, at 1 dB compression and at 3 dB compression. The length of the power sweep and the speed of the instrumentation must limit self-heating of the DUT, both to avoid degrading the device and to maintain a consistent thermal environment when measuring the RF output power at various points, which is necessary for an accurate efficiency calculation. This places the following requirements on the instrumentation:
- The sweep requires a pulsed RF source and a measurement capture of a few milliseconds.
- The measurement accuracy of the output power sweep is critical since the input power settings on the second pass depend on the measured output power. For efficiency measurements in the linear power range, a ±0.1 dB difference in measured output power translates to almost a ±0.5 point difference in efficiency. This is exacerbated when the amplifier is in compression, as each ±0.1 dB difference in measured output power translates to approximately a ±1 point difference in efficiency.
- The source linearity and measurement repeatability of the test system will affect the accuracy.
- Amplifiers generate harmonics from the high RF power and circuit design. While undesirable, they are often overlooked when measuring efficiency, despite contributing to measurement uncertainty, and should be considered in the design of the test setup. For accurate efficiency measurements, a narrowband, tuned receiver is better than a broadband receiver (see sidebar).
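The efficiency arithmetic, and the sensitivity figures quoted above, can be checked directly. This sketch assumes illustrative nominal efficiencies of 20 percent in the linear range and 45 percent in compression; the ±0.1 dB sensitivity follows from the power ratio alone.

```python
def pae_percent(pout_dbm, pin_dbm, v_drain, i_drain):
    """Power-added efficiency: (Pout - Pin) / Pdc, in percent."""
    pout_w = 10 ** ((pout_dbm - 30) / 10)
    pin_w = 10 ** ((pin_dbm - 30) / 10)
    return 100 * (pout_w - pin_w) / (v_drain * i_drain)

# Example point (illustrative numbers): 40 dBm out, 30 dBm in, 48 V at 0.5 A
pae = pae_percent(40.0, 30.0, 48.0, 0.5)   # (10 W - 1 W) / 24 W = 37.5 %

# Sensitivity of efficiency to a +/-0.1 dB output-power measurement error:
# a 0.1 dB error is a fixed *relative* power error, so the error expressed
# in efficiency points scales with the nominal efficiency itself.
rel_err = 10 ** (0.1 / 10) - 1             # ~2.3 % relative power error
linear_pts = 20.0 * rel_err                # ~0.47 points at 20 % efficiency
compressed_pts = 45.0 * rel_err            # ~1.05 points at 45 % efficiency
```

This reproduces the ±0.5 point and ±1 point figures quoted above for the assumed nominal efficiencies, and shows why the accuracy budget tightens as the amplifier is driven into compression.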
Despite the challenges described in this article, current ATE systems provide accurate microwave measurements of GaN PAs in a manufacturing environment, meeting the test time, cost and reliability expectations of production. Table 1 shows the measurement repeatability achievable for the most common PA measurements.
Semiconductor process development will continue to improve device performance, and production testing of these devices will remain a mix of characterization and qualification. Ensuring the quality of the test data is important to reduce process variation, improve device yield and help predict how process changes affect device reliability. Testing requirements will continue to evolve as 5G-Advanced and 6G standards move operating frequencies higher, and GaN device technology extends its advantages to higher frequencies.