Microwave Journal

Online Spotlight: RF Data Converter Performance and Evaluation Methods

September 14, 2022

Challenges remain in developing a consensus for measuring the performance of RF data converters. Many questions arise that cause contention as suppliers try to convince prospective customers about performance metrics. How does one come up with ‘a single number,’ such as effective number of bits (ENOB) or spurious free dynamic range (SFDR), and under what conditions, to tell the whole story? For example, does one simply use a single sine wave test tone across the entire bandwidth of interest? What about using a two-tone test to measure IM3 distortion?

RF data converters are typically used in radio transceivers, so would it be better to use actual modulated QAM signals and measure error vector magnitude (EVM) and adjacent channel leakage ratio? If so, should the test include equalization to open the eye and get better EVM? How does digital predistortion or post-distortion of amplifiers and buffers in the signal path factor into the picture?

Even more significant as a new trend, nonlinear equalization (NLEQ) is being used to make the raw data converter look dramatically better than it is in its native form. NLEQ typically requires computationally intensive off-chip DSP with iterative training and adaptation. For a known input signal condition, NLEQ can dramatically improve harmonic distortion and interleaving spurious performance, resulting in vastly better SFDR. The data converter providers need to inform their customers of the details of performance under NLEQ, the cost factor in computational hardware, and the limitations in actual applications.

This article provides a comprehensive overview that addresses these questions and helps customers make better, more informed decisions while correcting misunderstandings between marketing/sales claims versus ‘meaningful’ engineering evaluations.

As semiconductor technology continues to miniaturize at the nanometer scale, the sample rates of digital signal processing have increased accordingly, creating a demand for wideband RF data conversion. Analog-to-digital converter (ADC) sample rates have reached into the 100 giga samples per second (GSPS) range, although ENOB and SFDR are reduced at these higher rates. A tutorial by Norsworthy1 provides the basic principles governing RF data conversion. Some of that material is repeated herein for convenience.

First, what is the definition of an RF data converter? It could simply be defined as one with an arbitrarily high sample rate in the GSPS range. However, it often employs direct sampling of the RF incoming signal after some analog signal conditioning and amplification between an antenna and the actual ADC, where the frequency translation from RF to baseband occurs digitally after sampling, without the need for an analog down-conversion mixer prior to the ADC. This would set it apart from a more traditional approach using ‘direct conversion’ or ‘low IF.’1 In situations where the frequency of the incoming RF signal is above the first Nyquist zone of the ADC and requires an analog mixer before the ADC, it could alternatively be used in a ‘high IF’ conversion architecture.

While a direct RF sampling receiver is desired for software defined radios, it also presents the greatest performance demands on the ADC, including higher power consumption, without necessarily the highest overall receiver performance. An RF sampling receiver can be well suited for broadband scanning applications but is not well suited to maximize signal-to-noise ratio for transceivers or, in general, channelized communications, because jammers or competing signals consume more dynamic range than the ADC can accommodate.


A broad consensus exists on testing and evaluation methods for ADCs. The IEEE has a benchmark standard on this subject.2 It is intended for individuals and organizations who specify ADCs to be purchased and for suppliers interested in providing high-quality and high-performance ADCs to acquirers. Numerous excellent tutorials have also been written and the reader is encouraged to become familiar with this rich background.3–8 That said, the consensus begins to break down as ADC sampling rates reach into the GSPS range. We will explore some of the reasons for these ambiguities.

Both DC and AC parameters of ADCs are characterized. The DC parameters include differential and integral nonlinearities (DNL and INL). AC parameters include harmonic distortion, intermodulation distortion (IMD), thermal noise, phase noise and jitter.

A commonly used ADC ENOB formula is given by

ENOB = (SNDR − 1.76) / 6.02 (1)

where SNDR, expressed in dB, is the ratio of the captured signal power to the aggregate power of both noise and distortion.

For example, if an ADC operates with 1 dB of headroom below full scale (FS) and achieves an SNDR of 50 dB, the ENOB will be approximately 8 bits. Note that the SNDR can be referenced to FS using dBFS. ADCs suffer from higher distortion levels when the input signal power is near FS, and the measurements made at lower input power do not linearly extrapolate to measurements at higher input powers.
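The arithmetic above can be sketched in a few lines, assuming the standard relation ENOB = (SNDR − 1.76)/6.02 with SNDR in dB:

```python
def enob(sndr_db: float) -> float:
    """Effective number of bits from a measured SNDR in dB,
    using ENOB = (SNDR - 1.76) / 6.02."""
    return (sndr_db - 1.76) / 6.02

# The example above: an SNDR of 50 dB yields approximately 8 bits.
print(round(enob(50.0), 2))  # ~8.01
```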

The input analog linear range of ADCs decreases as voltage supply levels decrease. What was possible with a wider linear range at 5 V power supply levels is now severely reduced at 1 V supply levels. This results in lower saturation levels at the analog input buffer, causing higher distortion relative to the noise floor, readily seen as harmonic distortion using single tone inputs, as well as IMD using multi-tone inputs. The laws of physics governing thermal noise, phase noise, and jitter remain unchanged with supply voltage (except for mild differences in 1/ƒ noise), so that the ultimate resolution of ADCs is limited by fundamental physics and device geometry.

Because the ultimate performance measure of interest is proportional to the energy per conversion for a given number of effective quantization levels, one typically looks to Walden’s overall ADC FOM to determine and benchmark its power consumption P for a given sample rate ƒs and ENOB. The FOM is the energy consumption per conversion step, expressed in Joules/conv-step, and given by

FOM = P / (2^ENOB × ƒs) (2)
A lower value of Walden’s FOM corresponds to superior performance. A higher-order effect not expressed by Equation (2) is that the power consumption for a given ENOB does not scale linearly with the sample rate but worsens exponentially above certain sampling rates. FOMs for several ADC IP blocks are reported in the academic literature and in the updated ‘Walden’ tables (now the ‘Murmann’ tables from Stanford University)9 (see Figure 1). This FOM plot clearly shows an inflection point in the performance boundary above 100 MHz, where the best FOM levels rise nearly one order of magnitude for each order of magnitude of increase in Nyquist sampling frequency.


Figure 1 The Walden/Murmann FOM plot versus speed.9
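The FOM arithmetic of Equation (2) can be checked in a few lines; the part values below are hypothetical, chosen only to exercise the formula:

```python
def walden_fom_j(power_w: float, enob_bits: float, fs_hz: float) -> float:
    """Walden figure of merit: energy per conversion step in joules,
    P / (2**ENOB * fs)."""
    return power_w / (2.0 ** enob_bits * fs_hz)

# Hypothetical part: 500 mW, 7 ENOB, 10 GSPS.
fom = walden_fom_j(0.5, 7.0, 10e9)
print(f"{fom * 1e15:.0f} fJ/conv-step")  # ~391 fJ/conv-step
```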

Since the FOM worsens with increasing sample rate and resolution, so does the product’s size, weight, power and cost (SWAP-C). SWAP-C will ultimately determine if the product can be fielded. As a critical component of the product, the ADC will likely be an important factor determining that decision. If the ADC drives too high of a SWAP-C, a more conventional architecture may be prudent, employing lower-frequency ADCs with analog up/down conversion in front of the ADC.


The jitter from a clock driving the sample/hold of the ADC is often the most deleterious limiting factor in RF data conversion. The input frequency versus SNDR trend compiled by Murmann9 and shown in Figure 2 indicates that the highest frequency converters reported so far have achieved a noise floor no better than what can be achieved with a clock having a 0.1 picosecond RMS jitter level (dashed line) driving an ADC that is ideal in all other respects.

The jitter effect depends on the incoming frequency of the signal being sampled, and not the sampling frequency. Errors are induced because the sample-to-sample time fluctuates about the ideal period. The maximum amplitude error is where the incoming signal is at its greatest slope, at the zero crossing. The maximum phase error is where the incoming signal is at its lowest slope, at the positive or negative peaks. Both errors are at their worst at the highest incoming frequency.


Figure 2 The Walden/Murmann aperture error plot.9

A simple and convenient way of understanding jitter error is through the relationship between the RMS jitter time and the period of an incoming sine wave. The signal-to-jitter noise ratio is simply the inverse of the radian fraction of the standard deviation of the jitter, which results in

SNRjitter = 1 / (2π ƒin ΔtRMS) (3)
where ΔtRMS is the standard deviation of the period jitter and ƒin is the incoming signal frequency. Plots of the relationship for an ideal ADC with ΔtRMS values of 1 and 0.1 picosecond RMS are shown in Figure 2 by the solid and dashed lines, respectively.

A simple formula for calculating the ENOB due to jitter is2

ENOBjitter = (20 log10[1 / (2π ƒin ΔtRMS)] − 1.76) / 6.02 (4)
For example, if the incoming sine wave is 20 GHz and the RMS jitter is 50 femtoseconds, then the fractional period of jitter is 1/1000, and the ENOB due to jitter is 7 bits.
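The 20 GHz / 50 femtosecond example can be checked numerically, assuming the jitter-limited SNR expression −20 log10(2π ƒin ΔtRMS) and the standard SNDR-to-ENOB conversion:

```python
import math

def jitter_snr_db(f_in_hz: float, t_rms_s: float) -> float:
    """Jitter-limited SNR in dB: -20*log10(2*pi*f_in*t_rms)."""
    return -20.0 * math.log10(2.0 * math.pi * f_in_hz * t_rms_s)

def jitter_enob(f_in_hz: float, t_rms_s: float) -> float:
    return (jitter_snr_db(f_in_hz, t_rms_s) - 1.76) / 6.02

# 20 GHz input tone, 50 fs RMS clock jitter -> roughly 7 bits.
print(round(jitter_enob(20e9, 50e-15), 2))  # ~7.02
```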

Given a phase noise spectral density L(ω) around the sampling frequency of the clock, the period jitter is simply the square root of the integrated double-sided phase noise power divided by the sampling frequency ω0 in radians per second.

The phase noise at frequencies below 10 kHz offset from an oscillator’s frequency typically has a lowpass slope of 1/ƒ² transitioning to a 1/ƒ slope from 10 kHz out to around 10 MHz, depending on the design of the oscillator. Beyond 10 MHz offset, the phase noise power density flattens out and has an effective bandwidth well beyond the sample rate, because the sample clock must have a bandwidth many times higher than the sample frequency to have a sharp-enough edge for sampling. Consequently, in wideband samplers, the accumulated wideband phase noise usually dominates the relatively small amount of narrowband close-in phase noise near the oscillator frequency. All the high frequency components of phase noise beyond the sampling frequency are effectively aliased into the baseband of the ADC.
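The integration described above can be sketched as follows (double-sided integration, then division by the carrier in rad/s); the flat −150 dBc/Hz profile is made up purely for illustration:

```python
import numpy as np

def rms_jitter_s(offset_hz, l_dbc_hz, f0_hz):
    """RMS jitter from a single-sideband phase noise profile L(f) in
    dBc/Hz: integrate over the offset range, double for both
    sidebands, take the square root of the phase power (rad^2) and
    divide by the carrier frequency in rad/s."""
    l_lin = 10.0 ** (np.asarray(l_dbc_hz) / 10.0)
    f = np.asarray(offset_hz)
    # trapezoidal integration of L(f) over the offset range
    phase_power = 2.0 * np.sum(0.5 * (l_lin[1:] + l_lin[:-1]) * np.diff(f))
    return np.sqrt(phase_power) / (2.0 * np.pi * f0_hz)

# Illustrative flat -150 dBc/Hz floor, 10 MHz to 10 GHz, 10 GHz clock.
f = np.linspace(10e6, 10e9, 10_000)
l = np.full_like(f, -150.0)
print(f"{rms_jitter_s(f, l, 10e9) * 1e15:.0f} fs")  # ~71 fs
```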

Because RF receiver design must include a noise figure (NF) analysis of the entire receiver chain, the NF of the ADC must be determined so that an adequate amount of gain is placed in front of the ADC. For an ADC, the NF is defined as the ratio of the total effective input noise power of the ADC to the amount of that noise power caused by the source resistance alone. Because the impedance is matched, the square of the voltage noise can be used instead of noise power.

ADCs have relatively high NFs compared to other RF parts, such as low noise amplifiers (LNA) or mixers. In an actual system application, however, the ADC is typically preceded by at least one low noise gain block which reduces the overall ADC noise contribution to a very small level.

For example, an ADC having a full-scale RMS signal input of 1.13 V into 50 Ω will have a +14 dBm input level. If the bandwidth is 40 MHz, then the NF is 30.1 dB. An example from the Analog Devices MT-006 Tutorial3 shows a Friis analysis demonstrating that a 25 dB LNA gain stage having a 4 dB NF in front of the ADC will bring down the net effective cascaded NF to 7.53 dB.
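The cascade can be reproduced with a Friis calculation. With these rounded inputs the result lands near 7.6 dB, in the neighborhood of the tutorial's 7.53 dB figure (the tutorial's intermediate values are rounded differently):

```python
import math

def friis_nf_db(stages):
    """Cascaded noise figure from a list of (gain_db, nf_db) pairs,
    first stage first. The last stage's gain does not matter."""
    f_total = 1.0
    gain_lin = 1.0
    for gain_db, nf_db in stages:
        f = 10.0 ** (nf_db / 10.0)
        f_total += (f - 1.0) / gain_lin
        gain_lin *= 10.0 ** (gain_db / 10.0)
    return 10.0 * math.log10(f_total)

# 25 dB gain / 4 dB NF LNA in front of a 30.1 dB NF ADC.
print(round(friis_nf_db([(25.0, 4.0), (0.0, 30.1)]), 2))  # ~7.59
```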


Significant testing challenges face both the manufacturers of RF ADCs and, even more so, their customers who want to verify performance in their application environments and determine the impact of the RF ADC on their system designs. At such high sample rates, the test equipment needed to verify the manufacturer’s claimed specifications is extremely costly. Bench test equipment can easily exceed $1 million USD in cost and requires a subject-matter test expert to perform the measurements and obtain reliable data. The expert must also comprehend the impact of the result on the end-system.

Is there a single reliable general parameter such as ENOB or SFDR that is the ultimate dispositive parameter? The simple answer is: ‘No.’ A comprehensive range of parameters for characterization is generally necessary, but each application for which the ADC is used will make certain parameters more important than others.

For example, clock jitter is a determining performance metric that limits the ENOB at frequencies approaching the Nyquist rate. If 7 ENOB requires a 50 femtosecond RMS total jitter budget and that budget requires a sub-50 femtosecond jitter from the system clock source driving the ADC, but the system can only produce a 100 femtosecond jitter sample clock, is that effective loss of 1 bit from 7 to 6 bits acceptable to the customer?

ADC nonlinear distortions are primarily manifested in their INL characteristics, describing deviation from an ideal linear monotonic response. Unfortunately, suppliers specify INL only by the maximum static (DC) deviation in LSB units from the ideal staircase linear response. This single point measure provides very little information on the nature and curvature of the deviation across the full input code range, from which dynamic IMD can be estimated.
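To illustrate why the full INL curve matters, the sketch below fits a low-order polynomial to a synthetic (made-up) cubic-bow INL characteristic and estimates the resulting dynamic HD3; real measured INL data would be substituted for the synthetic curve:

```python
import numpy as np

# Synthetic INL: a 1.5-LSB cubic bow across a 12-bit code range.
n_codes = 4096
x = np.linspace(-1.0, 1.0, n_codes)          # normalized input
inl_lsb = 1.5 * (x**3 - x)                   # made-up INL shape

# Fit a low-order polynomial to the INL curve, as one would to data.
coef = np.polynomial.polynomial.polyfit(x, inl_lsb, 5)

# Apply the fitted error to a coherent full-scale sine, then read the
# third harmonic off the FFT. LSBs are converted to full-scale units.
n = 4096
s = np.sin(2 * np.pi * 65 * np.arange(n) / n)
y = s + np.polynomial.polynomial.polyval(s, coef) * (2.0 / n_codes)
spec = np.abs(np.fft.rfft(y))
hd3_dbc = 20 * np.log10(spec[3 * 65] / spec[65])
print(f"estimated HD3: {hd3_dbc:.1f} dBc")   # ~ -75 dBc
```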

When the ADC supplier does not provide specifications for a two-tone test of third-order intermodulation (IM3) characterization, what are customers to do if they then test the ADC for IM3 and the third-order input intercept point (IIP3) and obtain unacceptable IM3 and IIP3 measures? Until recently, most ADC suppliers did not provide in-band IMD characterization, but rather single tone sine wave testing for out-of-band harmonic distortion. However, RF amplifier or mixer design usually entails characterizing the IIP3 using a two-tone test.7

Most data converter suppliers have, to date, not provided this type of test data to their customers, believing that third-order harmonic distortion measures provide the requisite information on the same third-order distortion coefficient that governs in-band IM3. However, the frequency response rolls off at the third harmonic, creating the appearance of lower third-order distortion, so this presumption typically results in underestimating the in-band IM3 distortion.

Admittedly, proper two-tone testing is more exacting than single tone testing. The method requires two very pure and well-isolated RF sources with high Q bandpass filters at each source output to ensure minimum nonlinear parametric cross-modulation components that may be confused with the IM3 tones. While RF engineers are historically well acquainted with these types of test procedures, data conversion suppliers are not. The situation will likely correct itself as RF ADCs become more mainstream.

As newer semiconductor technologies have driven power supply levels down to 1 V, the input buffer or track-and-hold amplifiers at the front end of the ADCs are now exhibiting lower saturation levels. Hence, two-tone distortion measurement methods are becoming essential for characterizing high speed ADCs.
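What a two-tone test extracts can be sketched with a memoryless third-order model; the tone placement and cubic nonlinearity below are arbitrary choices for illustration:

```python
import numpy as np

# Two coherently sampled tones through a weak cubic compression
# y = x - a3*x**3; the IM3 products appear at 2*f1 - f2 and 2*f2 - f1.
n = 4096
k1, k2 = 101, 109                       # coherent FFT bin numbers
i = np.arange(n)
x = 0.25 * (np.sin(2*np.pi*k1*i/n) + np.sin(2*np.pi*k2*i/n))
y = x - 0.1 * x**3

spec = np.abs(np.fft.rfft(y))
im3_lo = 2*k1 - k2                      # lower IM3 product bin
im3_dbc = 20*np.log10(spec[im3_lo] / spec[k1])
print(f"IM3: {im3_dbc:.1f} dBc")        # ~ -46.5 dBc
```

Coherent bin placement makes windowing unnecessary here; a real bench measurement must also contend with source purity and cross-modulation, as noted above.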


NLEQ in communication systems design originated in the 1970s or earlier and became a staple of high-performance telecommunications modems by the early 1990s. The proper use of NLEQ requires characterization of the system exhibiting the nonlinear behavior under specific signal conditions. Modern NLEQ techniques entail adaptive convergence of the estimates of a Volterra series’ coefficients to mimic the inverse character of the nonlinearities.

Take the simplest example, a single tone test. For simplicity, assume that the nonlinearity of the system is a simple saturation characteristic such as an S-curve of the input/output relationship of an amplifier. Under this odd-symmetry condition, the Volterra coefficients are only a few odd-order coefficients, perhaps the third- and fifth-order, sufficient to cancel odd harmonic distortion components. For this test to be meaningful in the real world, the amplifier in this example would have to exhibit no time-varying characteristic or AC settling time issues that are a function of increasing nonlinearity as the sine wave test increases in frequency. Of course, this is usually not the case.

A single adaptation of the Volterra coefficients to that single tone at that single amplitude may result in a near-perfect cancellation of the harmonic distortion only under those conditions. It may produce a great-looking plot for a data sheet, however! Now, change the amplitude and frequency of the incoming tone. One must re-adapt to a new set of Volterra coefficients. The situation gets far more complicated with two-tone testing. Now, the IM3 rises polynomially with incoming tone amplitude.

To find a suitable set of adapted Volterra coefficients, one would need to pre-train the system and come up with a set of best-fit coefficients that reduce the IM3 tones across their amplitude range. It gets more complicated yet with a variable spacing of the two tones and a variable center frequency between the tones across the full Nyquist bandwidth of interest. In the above examples, it is assumed that the saturation of the input amplifier is the dominant source of distortion.
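The single-tone case above can be sketched end-to-end: a tanh saturation (an arbitrary stand-in for the amplifier S-curve) distorts a sine, and first-, third- and fifth-order correction coefficients are fit by least squares, mimicking a converged memoryless odd-order Volterra inverse:

```python
import numpy as np

n = 4096
x = 0.8 * np.sin(2 * np.pi * 65 * np.arange(n) / n)  # known test tone
y = np.tanh(x)                                       # odd-symmetric S-curve

# Fit c1, c3, c5 so that c1*y + c3*y**3 + c5*y**5 approximates x.
basis = np.column_stack([y, y**3, y**5])
coef, *_ = np.linalg.lstsq(basis, x, rcond=None)
y_corr = basis @ coef

def hd3_dbc(sig):
    s = np.abs(np.fft.rfft(sig))
    return 20 * np.log10(s[3 * 65] / s[65])

print(f"HD3 before: {hd3_dbc(y):.0f} dBc, after: {hd3_dbc(y_corr):.0f} dBc")
```

Change the tone's amplitude or add memory effects and the fitted coefficients no longer cancel the distortion, which is the re-adaptation problem described above.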

An on-chip scheme for linearizing an ADC under a single tone test has been demonstrated by Ali et al.10 This includes dither injection to train and calibrate errors up to the fifth-order in the digital domain. It has been used in pipeline converters and effectively smooths out INL errors, including those occurring at inter-stage breaks. A conceptual block diagram of this scheme is shown in Figure 3. Note that this approach is not all-digital and needs a digital-to-analog converter. It appears to be more accurate at lower frequencies, as it does not account for frequency dependent effects and loses its effectiveness as the input signal frequency is increased.


Figure 3 On-chip linearizer.10

Conceptually, what is needed is an all-digital approach, where the digitized RF signal is digitally post distorted and then adaptively correlated in a digital feedback loop with the desired signal. Such an approach would not require dither and would be less frequency dependent. Goodman proposed such a model,11 and a general conceptual diagram is shown in Figure 4.


Figure 4 Goodman’s all-digital generalized NLEQ conceptual diagram.11

All the highest frequency RF ADCs are of the interleaving variety. An interleaved ADC is composed of N branches of single ADCs operating at 1/N of the highest sample rate. Each branch ADC may have a slightly different sample phase error and amplitude error from its ideal position. The ADCs will normally have some interleaving correction algorithm running either in the foreground at start-up to calibrate, or in the background if the application allows for it. No matter how precise the interleaving calibration, it is never ‘perfect’ and the imperfections show up in the spectral domain as spurious components. Therefore, the total unwanted spurious artifacts of an RF ADC are a combination of harmonic distortion and interleaving mismatches.
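A minimal simulation illustrates the interleaving artifact: a 4-way interleaved sampler with small made-up gain and timing errors per branch produces image spurs at k·ƒs/M ± ƒin:

```python
import numpy as np

m, n = 4, 4096                      # 4-way interleave, FFT length
k_in = 157                          # coherent input tone bin
gain_err = np.array([0.0, 1e-3, -5e-4, 2e-3])   # per-branch gain error
time_err = np.array([0.0, 2e-2, -1e-2, 3e-2])   # in sample periods

i = np.arange(n)
b = i % m                           # which branch samples index i
x = (1.0 + gain_err[b]) * np.sin(2*np.pi*k_in*(i + time_err[b]) / n)

spec_db = 20*np.log10(np.abs(np.fft.rfft(x)) / (n/2) + 1e-20)
spur_bin = n//m - k_in              # image spur at fs/4 - fin
print(f"signal {spec_db[k_in]:.1f} dBFS, spur {spec_db[spur_bin]:.1f} dBFS")
```

The error values are arbitrary; the point is that periodic branch mismatches modulate the input and concentrate energy at predictable spur bins.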

Each ADC core additionally has its own DNL and INL characteristic, i.e., they are not the same from branch to branch. Combining all the nonidealities and all their sources raises the question of how to train the ADC with NLEQ to reduce the spurs and improve the net SFDR of the converter.

If the application of the ADC were well constrained to a specific frequency band, bandwidth, amplitude range and type of modulation signal, it is likely that the ADC nonlinearity could be trained long enough with a general Volterra series to estimate enough meaningful coefficients to improve the performance of the ADC.

Goodman11 proposed an interleaved version of his model, and a conceptual diagram is shown in Figure 5. Each branch has its own NLEQ, so the computations of the finite impulse response (FIR) filters are polyphase and run at 1/N of the RF sample rate. Experimental results from Goodman11 show 20 dB of SFDR improvement but there is a severe computational complexity cost. Approximately 200 multiply/adds per RF sample are needed, verified using actual commercial ADCs.


Figure 5 Goodman’s all-digital polyphase NLEQ conceptual diagram.11

It takes thousands of training passes consisting of two- or three-tone tests to accumulate the optimum Volterra coefficients for the filters. The user should be aware that if the input signal conditions during the application do not match up with the training signals that determined the Volterra coefficients, then performance could be dramatically diminished.

The whole point of an extremely wideband RF ADC is to run at a highly oversampled condition to capture input signals in the most flexible digital manner possible. One must consider the assumptions of an initial training, including the center frequency of the carrier constellation relative to RF Nyquist, the bandwidth of the modulation, and the modulation characteristic itself.

Assume that the Volterra coefficient adaptive training was done at the high rate prior to the digital down-converter (DDC), say, 50 GSPS. Then roughly a 10 teraflops computation rate is needed. Now assume that the training was done at a decimated rate of 8 GSPS after the DDC. This would result in a 1.6 teraflops computation rate, far more practical to implement in hardware. The downside is that the down-converted and decimated signal throws away potentially valuable nonlinear information that would be valuable for an accurate adaptation.

For example, IM3 distortion from out-of-band blockers at higher frequencies as seen at the undecimated higher rate will potentially fold into the desired baseband and defeat the purpose of NLEQ. It seems that RF-rate NLEQ may be altogether unavoidable, which reverts back to the multi-teraflop computational complexity problem.
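The computation-rate arithmetic above can be checked directly, using the roughly 200 multiply/adds per sample cited earlier:

```python
ops_per_sample = 200  # approximate NLEQ multiply/adds per sample

for label, rate_sps in (("RF rate (50 GSPS)", 50e9),
                        ("decimated (8 GSPS)", 8e9)):
    tera = rate_sps * ops_per_sample / 1e12
    print(f"{label}: {tera:.1f} teraops/s")
```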

As for implementation, assume that the maximum multiply/accumulate rate for a dedicated hardware multiplier in a technology node such as 12 nm is roughly 1 GHz. If part of the computations are in the FIR filters and part in the least mean squared (LMS) adaptation and update steps, then there may be as many as 1,000 hardware multiply/accumulate units on-chip.

Some may view this as overly pessimistic. If only the FIR filters run continuously and the LMS adaptation update is turned off once it has ‘converged,’ the computational burden during the actual application is lower.

For example, assume 32 ADC branches each with a fifth-order polyphase NLEQ FIR filter, then there are 160 multiply/accumulate units per RF cycle, each running at 1/N rate, or maybe twice that number, resulting in a rate of at least 300 flops per RF cycle. Assume further that it takes roughly 20,000 transistors to implement a full 16 x 16 multiplier with a 36-bit accumulator. Then there are roughly 6 million transistors, not including the registers and state machines running the filters. The actual number for a dedicated NLEQ machine approaches 10 million transistors. That may become more reasonable as ADCs advance into a technology such as 3 nm.
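The transistor estimate above follows from simple multiplication; this sketch just reproduces the article's rough figures:

```python
branches, taps = 32, 5
macs = branches * taps                 # MAC units per RF cycle
macs_total = 2 * macs                  # allow ~2x for LMS/update hardware
transistors_per_mac = 20_000           # 16x16 multiplier + 36-bit accumulator
total = macs_total * transistors_per_mac
print(f"{macs} MACs, ~{total/1e6:.1f} million transistors")  # ~6.4 million
```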

At the time of this writing, there is one known example of an RF data converter that resides monolithically within a field programmable gate array (FPGA),12 and yet its sample rate is limited to 4 GSPS, far below the fastest RF data conversion chips sampling at 50 GSPS or higher.

A co-packaged FPGA and RF data converter has been offered by Intel.13 The RF data converter is based on the work of Hornbuckle,14 which disclosed that NLEQ is used to help produce an SFDR of 73 dB at 32 GSPS. Neither Intel nor Hornbuckle provides any information on the NLEQ processing. As for the performance, there is no ‘before’ and ‘after’ NLEQ comparison. It would be important to know what the ‘raw’ ADC produces with NLEQ turned off.

More questions remain as to the SFDR improvement when input signal conditions are changed without re-training and adaptation. Also, neither Intel nor Hornbuckle provides any details on the additional power consumption required to run the NLEQ processing in real time or on the FPGA resources needed for the NLEQ processing. The Hornbuckle chip has its origins in an RF ADC chip from Kull et al.15 where the reported SFDR was 46 dB.

If NLEQ is factored into the system, a new metric is needed for FOM and SWAP-C, which includes the power consumption and cost factor of the NLEQ under a given condition and a given level of performance improvement. If NLEQ processing resides in a separate FPGA, the power consumption of the FPGA, as well as the cost, may be out of reach for most applications.

The highest-rate RF data conversion chips consume at least an order of magnitude less power than the most advanced FPGAs, e.g., 15 W vs. 150 W. The FOM of the combined parts is an order of magnitude greater than the data converter alone. The cost is likely far more than the linear sum of a separate RF data converter and an FPGA, due to packaging, testing and yield complexities. This implies that the SWAP-C also rises due to power and cost.

It also implies the combined parts will need special heat dissipation packaging with a fan, like an FPGA, where an RF data converter alone may not need a fan. Even if power and cost were acceptable, a customer would have to experiment with training and adaptation for the particular bandwidth and modulation required. Assuming these power obstacles were surmountable, integrating an FPGA with an RF data converter into a net cost-effective, lower-power product remains a major challenge.

The power and cost factors improve with technology scaling, but ultimately an NLEQ computational engine will need to be on-chip with the ADC, made possible in a fine line technology node such as 3 nm. Since the computational engine executing the NLEQ would be in a dedicated architecture on-chip with the RF ADC, it would be far more efficient than if it were executed in an FPGA or even a separate ASIC. The high speed RF digital data sampled by the ADC would not need to be bussed off-chip.

The final hurdle will be what the customer must do to apply NLEQ to a system solution. Since the customer knows the application best, e.g., the constraints of the sampling frequency, the bandwidth, the type of modulation and interfaces, will the customer develop the procedure to train and adapt the NLEQ?

Finally, there are numerous patents or applications on NLEQ as applied to RF ADCs and analog receivers. A sampling is provided in the references.16-21


Our understanding of RF ADCs and their relevant performance metrics has evolved to the point where simple sine wave testing for ENOB and SFDR is no longer adequate to describe their behavior in a meaningful and comprehensive way. It is altogether too easy to apply simple adaptive methods that remove nearly all unwanted spectral artifacts from single- and two-tone input tests to produce nearly perfect SFDR results.

Such NLEQ adaptation is dependent on both frequency and bandwidth. Change these parameters and then re-adapt! Make it general enough to cover all useful frequencies and bandwidths and input modulation types, and the result will be a computational complexity level that is too high for most applications.

The question remains: is the SWAP-C worth the processing penalty to the customer? In some cases it is, while in others it is not. The training for a given application must be mastered by the customer and no one situation fits all. Transparency is urged for the suppliers of these new types of RF ADCs to better inform their customers of these complex tradeoffs.

The first supplier to conquer the overall monolithic solution of ADC+NLEQ that eliminates most of the nonlinearities in an otherwise uncompensated system, and does it in a process such as 3 nm technology, will be a potential winner.

References
  1. S. Norsworthy, “RF Data Conversion for Software Defined Radios,” IEEE 20th Wireless and Microwave Technology Conference (WAMICON), April 2019.
  2. “IEEE Standard for Terminology and Test Methods for Analog-to-Digital Converters,” IEEE Std 1241-2010 (Revision of IEEE Std 1241-2000), January 2011, pp.1-139.
  3. W. Kester, “ADC Noise Figure – an Often Misunderstood and Misinterpreted Specification,” Analog Devices, MT-006 Tutorial, Rev. B, April 2014, pp. 1-9.
  4. A. Buchwald, “Specifying and Testing ADCs,” IEEE ISSCC, Tutorial, February 2010.
  5. A. Arrants, B. Brannon and R. Reeder, “Understanding High Speed ADC Testing and Evaluation,” Analog Devices, Application Note AN-835 Rev B, pp. 1-28.
  6. J. Karki, “Calculating Noise Figure and Third-Order Intercept in ADCs,” Texas Instruments Data Acquisition Journal,  2003, pp. 1-16.
  7. B. Annino, “SFDR Considerations in Multi-Octave Wideband Digital Receivers,” Analog Dialogue, Vol. 55, No. 1, January 2021. 
  8. T. Neu, “Clocking the RF ADC: Should You Worry About Jitter or Phase Noise?,” Texas Instruments Analog Applications Journal,  2017.
  9. B. Murmann, “ADC Performance Survey 1997-2022,” Web. https://web.stanford.edu/~murmann/adcsurvey.html.
  10. A. M. A. Ali, H. Dinc, P. Bhoraskar, S. Bardsley, C. Dillon, M. McShea, J. P. Periathambi and S. Puckett, “A 12-b 18-GS/s RF Sampling ADC With an Integrated Wideband Track-and-Hold Amplifier and Background Calibration,” IEEE Journal of Solid-State Circuits, Vol. 55, No. 12, December 2020, pp. 3210-3224.
  11. J. Goodman, B. Miller, M. Herman, G. Raz and J. Jackson, “Polyphase Nonlinear Equalization of Time-Interleaved Analog-to-Digital Converters,” IEEE Journal of Selected Topics in Signal Processing, Vol. 3, No. 3, June 2009, pp. 362-373.
  12. “Zynq UltraScale+ RFSoC,” Xilinx, Web. www.xilinx.com/products/silicon-devices/soc/rfsoc.html.
  13. “‘Eagle Summit’ FPGA with Integrated RF Data Converter,” Intel, Web. www.intel.com/content/www/us/en/architecture-and-technology/programmable/analog-rf-fpga.html.
  14. C. Hornbuckle, “Ultra-High Speed Analog-to-Digital Converters in 14nm FinFET Process and Usage in Digital and Hybrid Phased Array Systems,” GoMACTech Conference, March 2018, pp. 504-510.
  15. L. Kull, D. Luu, C. Menolfi, M. Braendli, P. Francese, T. Morf, M. Kossel, A. Cevrero, I. Özkaya and T. Toifl, “A 24-72-GS/s 8-b Time-Interleaved SAR ADC with 2.0-3.3 pJ/Conversion and > 30 dB SNDR at Nyquist in 14-nm CMOS FinFET,” Journal of Solid State Circuits, Vol. 53, No. 12, December 2018, pp. 3508-3516.
  16. G. M. Raz and C. P. Chan, “Method and System of Nonlinear Signal Processing,” U. S. Patent 7 609 759, October 27, 2009.
  17. R. J. Velazquez and S. R. Velazquez, “Adaptive Digital Receiver,” U. S. Patent 9 118 513, August 25, 2015.
  18. S. R. Velazquez and R. J. Velazquez, “Linearity Compensator for Removing Nonlinear Distortion,” U. S. Patent 9 160 310, October 2015.
  19. S. R. Velazquez, “Compensator for Removing Nonlinear Distortion,” U. S. Patent 9 705 477, July 2017.
  20. S. R. Velazquez and Y. Wang, “Multi-Dimensional Compensator,” U. S. Patent 10 911 029, February 2021.
  21. H. H. Kim, A. Megretski, Y. Li and K. Chuang, “Digital Compensation for a Non-Linear Analog Receiver,” U. S. Patent 9 564 876, February 2017.