Microwave Journal
www.microwavejournal.com/articles/32951-characterizing-uncertainty-in-s-parameter-measurements

Characterizing Uncertainty in S-Parameter Measurements

October 10, 2019

One might ask why engineers should expand their S-parameter measurement practices to include uncertainties, since they have been largely ignored until now. The answer lies mainly in the advancement of technology: as new technologies emerge and are introduced as standards, the specifications and requirements for products get tighter, especially with increasing frequency. This trend can be seen not only with systems, but also at the component level, including amplifiers, filters and directional couplers. Therefore, engineers responsible for the design and production of these components need to increase the confidence in their measurements and product characterization.

Imagine the following: an engineer designs an amplifier requiring a minimum gain over a frequency bandwidth. The amplifier is measured and meets the specification. A few hours later, the amplifier is remeasured and no longer meets the specifications at the high end of the frequency band (see Figure 1). Why is the amplifier not meeting the specification? There could be many reasons: the measurement system drifted, someone in the lab moved or damaged one of the cables in the measurement setup or one of many other possibilities, including doubts about the design, fabrication or stability of the product.

Figure 1

Figure 1 Amplifier gain measurements at two times: the first in spec, the second out of spec at the upper band edge.

If it is that easy to take two measurements and obtain different results, how can one know which measurement is correct? The confusion arises from not characterizing and including the uncertainties in the measurement, which ultimately leads to an overall lack of confidence in the results. Careful engineers use methods to validate a setup before taking measurements. More careful users test “golden devices” - those with similar characteristics to the actual device under test (DUT) - as a validation step and reference internal guidelines to decide whether the data is good enough. While this is a step in the right direction, how are these guidelines defined? Are the guidelines truly objective, or is subjectivity built in? How close is close enough? Uncertainty evaluation is a powerful tool allowing users to both validate vector network analyzer (VNA) calibration and properly define metrics for golden devices before taking measurements. Figure 2 illustrates this, showing the same amplifier gain measurement with the uncertainties of the system.

Figure 2

Figure 2 Amplifier gain measurement showing measurement uncertainty, calculated using Maury MW Insight software.

Uncertainties

Every measurement, no matter how carefully performed, inherently involves errors. These arise from imperfections in the instruments, in the measurement process, or both. The “true value of a measured quantity” (a_true) can never be known and exists only as a theoretical concept. The value that is measured is referred to as the “indication” (a_ind), and the difference between the indication and the true value is the error:

e = a_ind - a_true
Since the true value is unknown, the exact error e in the measurement is also unknown. There are two types of errors:

Systematic errors: In replicate measurements, this component remains constant or varies in a systematic manner and can be modeled, measured, estimated and, if possible, corrected to some degree.1 Remaining systematic errors are unknown and need to be accounted for by the uncertainties.

Random errors: This component varies in an unpredictable manner in replicate measurements.2 Some examples are fluctuations in the measurement setup from temperature change, noise or random effects of the operator. While it might be possible to reduce random errors - with better control of the measurement conditions, for example - they cannot be corrected for. However, their size can be estimated by statistical analysis of repetitive measurements. Uncertainties can be assigned from the results of the statistical analysis.
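As a simple illustration of such a Type A (statistical) evaluation, the following Python sketch estimates the random contribution from repeated readings of the same quantity. The gain values are made up for the example, and the standard uncertainty of the mean is taken as the sample standard deviation divided by the square root of the number of readings.

    import numpy as np

    # Hypothetical repeated gain readings (dB) of the same DUT under nominally
    # identical conditions; the values are illustrative only.
    readings_db = np.array([20.12, 20.15, 20.09, 20.14, 20.11, 20.13, 20.10, 20.16])

    n = readings_db.size
    mean = readings_db.mean()
    std = readings_db.std(ddof=1)          # sample standard deviation
    u_random = std / np.sqrt(n)            # standard uncertainty of the mean (Type A)

    print(f"mean gain     : {mean:.3f} dB")
    print(f"std deviation : {std:.4f} dB")
    print(f"u (Type A)    : {u_random:.4f} dB")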

In general, a measurement is affected by a combination of random and systematic errors; for a proper uncertainty evaluation, the different contributions need to be characterized. A measurement model is needed to put the individual influencing factors in relation with the measurement result.3 Coming up with a measurement model that approximates reality sufficiently well is usually the hardest part in uncertainty evaluation. Propagating the uncertainties through the measurement model to obtain a result is merely a technical task, although sometimes quite elaborate. Finally, the measurement result is generally expressed as a single quantity or estimate of a measurand (i.e., a numerical value with a unit) and an associated measurement uncertainty u. This procedure, described here, is promoted by the “Guide to the expression of uncertainty in measurement” (GUM),4 which is the authoritative guideline to evaluate measurement uncertainties.
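As a minimal sketch of how uncertainties propagate through a measurement model, the following snippet applies the first-order GUM formula for uncorrelated inputs, u²(y) = Σ (∂f/∂xᵢ)² u²(xᵢ), using finite-difference sensitivities. The model and numbers are illustrative only and are not tied to any particular VNA or software.

    import numpy as np

    def propagate(f, x, u_x, h=1e-6):
        """First-order (GUM) propagation for uncorrelated inputs:
        u_y^2 = sum_i (df/dx_i)^2 * u_xi^2, with the sensitivities df/dx_i
        estimated by central finite differences."""
        x = np.asarray(x, dtype=float)
        u_x = np.asarray(u_x, dtype=float)
        y = f(x)
        u2 = 0.0
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = h * max(abs(x[i]), 1.0)
            dfdx = (f(x + dx) - f(x - dx)) / (2 * dx[i])
            u2 += (dfdx * u_x[i]) ** 2
        return y, np.sqrt(u2)

    # Illustrative model: gain in dB from the linear wave ratio |b2|/|a1|.
    gain_db = lambda x: 20 * np.log10(x[0] / x[1])   # x = (|b2|, |a1|)

    y, u_y = propagate(gain_db, x=[3.16, 1.00], u_x=[0.02, 0.01])
    print(f"gain = {y:.2f} dB, standard uncertainty = {u_y:.3f} dB")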

S-Parameters and VNA Calibration

How do these concepts apply to S-parameter measurements? Recall that S-parameters are ratios of the incident (pseudo) waves, denoted by a, and the reflected (pseudo) waves, denoted by b:

S_ij = b_i / a_j (with all ports other than j terminated, i.e., a_k = 0 for k ≠ j)
The definition of S-parameters implies a definition of reference impedance.5-6 The most common measurement tool used to measure S-parameters is a VNA. While different VNA architectures exist, the most common versions for two-port measurements use either three or four receivers.5-7
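To make the wave-ratio definition concrete, the short sketch below forms a one-port reflection coefficient from hypothetical complex incident and reflected wave readings and expresses it as return loss; the numbers are arbitrary.

    import numpy as np

    # Hypothetical complex receiver readings at one frequency point:
    a1 = 0.50 + 0.00j      # incident (pseudo) wave at port 1
    b1 = 0.05 - 0.02j      # reflected (pseudo) wave at port 1

    s11 = b1 / a1                              # reflection coefficient, a ratio of waves
    return_loss_db = -20 * np.log10(abs(s11))

    print(f"S11 = {s11:.3f}, |S11| = {abs(s11):.3f}, return loss = {return_loss_db:.1f} dB")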

To simplify the understanding of the subject, consider a one-port VNA measurement (see Figure 3). The case for two-port or more general N-port measurements can be obtained through generalizations, as shown in the literature.7 Figure 3a shows a typical setup, where a VNA, cable and connectors are used as a measurement system to measure a DUT. To evaluate uncertainties in the S-parameter measurements, a measurement model first needs to be established, to describe the relation between the output variables, the incident and reflected waves at a well-defined port (i.e., the reference plane), and the indications at the VNA display (i.e., the raw voltage readings of the VNA receivers). These models should include systematic as well as random errors to increase confidence in the results. Not estimating systematic errors correctly leads to inaccurate measurements. On the other hand, wrong estimates of the random errors can either degrade the precision of the result or indicate the results are precise when they are not.

Figure 3

Figure 3 One-port measurement hardware setup (a), systematic error model (b) and signal flow graph (c).

Classical VNA Error Model

VNA measurements are affected by large systematic errors which are unavoidable and inherent to the measurement technique, related to signal loss and leakage. They establish a relation between the indication, i.e., the raw measured reflection coefficient Γ_m, and the S-parameter Γ at the reference plane, shown by the signal flow graph of Figure 3c. The error box consists of three error coefficients: directivity (E00), source match (E11) and reflection tracking (E01). The graphical representation in Figure 3b can be transformed into a bilinear function between the indications and the S-parameters at the reference plane through the three unknown error coefficients:

Γ_m = E00 + E01·Γ / (1 - E11·Γ)

To estimate the unknown error coefficients of the model, three known calibration standards must be measured for the one-port case, more if multiple ports are involved. After estimating the error coefficients, any subsequent measurement of raw data (i.e., indications) can be corrected. This technique is commonly referred to as VNA calibration and VNA error correction.
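The following Python/NumPy sketch illustrates the algebra of the one-port case: it solves the bilinear model for the three error coefficients from three known standards (here idealized short, open and load) and then corrects a raw DUT reading. It is a minimal illustration under idealized assumptions, not the procedure of any particular VNA.

    import numpy as np

    def solve_error_terms(gamma_std, gamma_meas):
        """One-port calibration: given the known reflection coefficients of three
        standards (gamma_std) and their raw indications (gamma_meas), solve the
        bilinear model  Gm = E00 + E01*G / (1 - E11*G)  for E00, E11, E01.
        Rearranged, it is linear in (E00, E01 - E00*E11, E11):
            Gm = E00 + G*(E01 - E00*E11) + G*Gm*E11
        """
        G = np.asarray(gamma_std, dtype=complex)
        Gm = np.asarray(gamma_meas, dtype=complex)
        A = np.column_stack([np.ones(3), G, G * Gm])
        x = np.linalg.solve(A, Gm)
        e00, e11 = x[0], x[2]
        e01 = x[1] + e00 * e11
        return e00, e11, e01

    def correct(gamma_meas, e00, e11, e01):
        """Apply error correction by inverting the bilinear model for the DUT."""
        return (gamma_meas - e00) / (e01 + e11 * (gamma_meas - e00))

    # Idealized short, open, load standards and illustrative raw readings:
    standards = [-1.0, 1.0, 0.0]
    raw_readings = [-0.93 + 0.05j, 1.02 + 0.04j, 0.05 + 0.01j]

    e00, e11, e01 = solve_error_terms(standards, raw_readings)
    gamma_dut = correct(0.30 + 0.10j, e00, e11, e01)
    print(f"corrected DUT reflection coefficient: {gamma_dut:.4f}")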

Different calibration techniques have been developed to estimate the error coefficients. Some require full characterization of the calibration standards, such as short-open-load (SOL) or short-open-load-thru (SOLT), while others require only partial characterization, such as thru-reflect-line (TRL), short-open-load-reciprocal thru (SOLR) and line-reflect-match (LRM) for two-port calibrations.8 Even when the calibration standards are characterized, they are not characterized perfectly, and the error associated with the characterization increases the inaccuracy of the estimated error coefficients: directivity, source match and reflection tracking.

Engineers have developed experimental techniques to estimate these residual errors (i.e., residual directivity, residual source match and residual reflection tracking). Connecting a beadless airline terminated with a reflection standard to the calibrated port enables the residual errors to be observed as a superposition of reflections versus frequency. In the frequency domain, this implies ripples in the reflection coefficient (see Figure 4). Due to the characteristic pattern in the frequency response, the method is referred to as the “ripple method,” where the magnitude of the ripples is used to estimate the residual errors and uncertainties related to directivity and source match. This method has various shortcomings: it is unable to determine the residual error in tracking and requires handling air-dielectric lines, which becomes impractical as frequency increases.7
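A common rule of thumb attributes half of the peak-to-peak ripple, converted to linear units, to the residual error term. The sketch below applies that rule to illustrative ripple extremes; actual procedures and correction factors vary, and the EURAMET guide discusses the method's limitations.

    import numpy as np

    def residual_from_ripple(max_db, min_db):
        """Half the peak-to-peak ripple, in linear units, of a reflection trace
        measured with an airline termination. The trace swings between
        |G_ref| + |d| and |G_ref| - |d| as the phase rotates along the line,
        so half the linear ripple estimates the residual magnitude |d|."""
        return (10 ** (max_db / 20) - 10 ** (min_db / 20)) / 2

    # Illustrative numbers: a low-reflection trace rippling between -38 and -46 dB.
    resid = residual_from_ripple(-38.0, -46.0)
    print(f"estimated residual magnitude ~ {resid:.4f} ({20 * np.log10(resid):.1f} dB)")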

Figure 4

Figure 4 Source match after one-port calibration using Maury MW Insight software.

Residual errors have traditionally been used to gain confidence in the measurement, based on experience. The challenge is understanding what, for example, a residual directivity of 45 dB means when measuring a DUT with 36 dB return loss. Moreover, the uncertainties of the error coefficients are not reliable when estimated with the ripple method, and they are insufficient to gain confidence in the measurement results. The classical VNA error model is thus incomplete for performing VNA calibration and error correction with uncertainty evaluation.

Adding Uncertainties to the Classical VNA Error Model

This section explains how to expand the classical VNA error model into a full measurement model by adding the other factors influencing the measurement. Using such a full model, the uncertainties can be evaluated in a direct and conceptually clear way. The measurement setup leading from the calibration reference plane to the receiver indications contains several sources of error and influence factors that contribute to the total uncertainty. The classical VNA error model can be expanded to include these factors, becoming a full measurement model. Typical components include the VNA (e.g., linearity, noise and drift), cables, connectors and the calibration standards. The European Association of National Metrology Institutes (EURAMET) recommends the model shown in Figure 5, where the traditional error coefficients are identified by the E block and the other influence factors are represented by the R, D and C blocks.7,9 The full model in the figure contains just the building blocks, which are further refined using signal flow graphs. Without going into the details of these models, the main errors and related signal flow graphs are described below.

Figure 5

Figure 5 VNA measurement model.

Cable and Connector

Cables are used between the VNA receivers and the reference plane, making them part of the calibration. They are subject to environmental variations, as well as movement and bending. When cables are moved or bent during calibration or DUT measurement, the error coefficients are expected to change. The cable model uses two parameters: cable transmission (CAT) and cable reflection (CAR), shown in Figure 6a. While cable suppliers typically specify these values in cable assembly datasheets, the cables should be characterized for the typical range of flexure or movement encountered during calibration and measurement.7

Similarly, the connectors used for connecting and disconnecting the calibration standards and DUT affect the reference plane, based on how repeatable the pins and fingers are designed and built. The S-parameter response of a device differs each time it is connected, disconnected and reconnected, which is modeled by one parameter, the connector repeatability (COR).
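One plausible way to characterize connector repeatability is to reconnect a stable device several times and evaluate the scatter of the measured reflection at each frequency. The sketch below does this with synthetic data; the device, the number of reconnections and the scatter level are all assumptions for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical |S11| traces (linear) of the same stable device measured after
    # five connect/disconnect cycles; rows = reconnections, columns = frequency points.
    nominal = np.full(201, 0.050)                        # roughly a -26 dB reflection
    traces = nominal + rng.normal(0.0, 0.002, (5, 201))  # illustrative scatter only

    # A simple per-frequency repeatability estimate: the standard deviation across
    # reconnections. Characterization procedures differ; this is only a sketch.
    cor = traces.std(axis=0, ddof=1)
    print(f"worst-case connector repeatability ~ {cor.max():.4f} "
          f"({20 * np.log10(cor.max()):.1f} dB)")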

Figure 6

Figure 6 Models for the cable and connector (a) and VNA noise, linearity and drift (b).



VNA

The receivers in the VNA tend to deviate from linear behavior at high input power levels. Nonlinearity is essentially a systematic error that can be corrected using an appropriate nonlinear model. Since the nonlinear behavior may be different for each receiver and modeling each is impractical, the non-linearity is approximated with a linear model, denoted as L in Figure 6b.

Noise is a random error and encompasses unpredictable fluctuations in the indications of the VNA. The noise influence is divided into the noise floor (NL) and trace noise (NH), where the noise floor is observed without any source signal, and the trace noise scales with the applied source signal level.
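As an illustration, both noise contributions can be estimated from repeated sweeps: the noise floor from a terminated port with no intended signal, and the trace noise from a strong signal, expressed relative to the signal level. The sketch below uses synthetic sweeps purely to show the bookkeeping.

    import numpy as np

    rng = np.random.default_rng(1)
    n_sweeps, n_freq = 20, 201

    # Synthetic repeated sweeps of complex receiver readings:
    # (a) matched termination, no intended signal -> noise floor (NL)
    # (b) a strong through signal                 -> trace noise (NH)
    floor_sweeps = (rng.normal(0, 1e-5, (n_sweeps, n_freq))
                    + 1j * rng.normal(0, 1e-5, (n_sweeps, n_freq)))
    signal = 0.9 * np.ones(n_freq)
    trace_sweeps = signal + (rng.normal(0, 2e-4, (n_sweeps, n_freq))
                             + 1j * rng.normal(0, 2e-4, (n_sweeps, n_freq)))

    noise_floor = floor_sweeps.std(axis=0, ddof=1).mean()                     # absolute level
    trace_noise = (trace_sweeps.std(axis=0, ddof=1) / np.abs(signal)).mean()  # relative to signal

    print(f"noise floor (NL) ~ {20 * np.log10(noise_floor):.1f} dB")
    print(f"trace noise (NH) ~ {100 * trace_noise:.3f} % of the signal level")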

Drift accounts for changes in the performance of the entire measurement system over time, due to thermal and other environmental effects. A simple model associates a drift value (D00, D11, D01) to each error term, as shown in Figure 6b.

Calibration Standards

The calibration standards need to be characterized, including their associated uncertainties (shown as block S in Figure 5). Depending on the level of accuracy required, this can be obtained from the manufacturer, a calibration laboratory or a national metrology institute, with the characterization traceable to SI units.10 It has been demonstrated that coaxial calibration standards can be characterized more accurately and more consistently by including the effects of the connectors in the characterization.11 When performing the VNA calibration to estimate the error coefficients, these uncertainties are propagated together with the other contributions to the error coefficients.

Once all the sources of error and influences are modeled and estimated, VNA calibration and error correction can be performed. Uncertainty contributions are propagated through the full measurement model to the measurement results. This will be sufficient to have confidence in the measurement if the following conditions are met:

  • All sources of significant errors and influences are included in the models (see the error models described previously).
  • The sources are estimated realistically, i.e., these errors are characterized based on the real measurement conditions; in some cases, supplier specifications may not be sufficient.
  • The calibration standards are characterized accurately with realistic uncertainties.

The first condition is usually satisfied for most measurement setups. The second depends mostly on the operator estimating the uncertainties, and the third depends on the source characterizing the standards.

Using this approach will enable engineers to determine an uncertainty budget and the major contributions to the overall uncertainty. This is a powerful tool because it shows where to improve system accuracy if the uncertainty is too high. To illustrate, in the amplifier measurement (see Figure 7 and Table 1), cable stability and connector repeatability represent more than 90 percent of the total uncertainty.
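A hedged sketch of such a budget is shown below: standard uncertainty contributions are combined as a root sum of squares (assuming uncorrelated contributions), and each contributor's share of the combined variance is reported. The numbers are invented so that cable stability and connector repeatability dominate, echoing the example; they are not the values of Table 1.

    import numpy as np

    # Illustrative standard-uncertainty contributions to an amplifier gain
    # measurement at one frequency point (dB); values are made up, not Table 1.
    budget = {
        "cable stability":         0.060,
        "connector repeatability": 0.045,
        "VNA noise":               0.008,
        "VNA linearity":           0.010,
        "VNA drift":               0.012,
        "calibration standards":   0.015,
    }

    u = np.array(list(budget.values()))
    u_combined = np.sqrt(np.sum(u ** 2))    # root sum of squares (uncorrelated inputs)

    print(f"combined standard uncertainty: {u_combined:.3f} dB")
    print(f"expanded (k = 1.96, ~95%):     {1.96 * u_combined:.3f} dB")
    for name, ui in sorted(budget.items(), key=lambda kv: -kv[1]):
        print(f"  {name:<24s} {100 * (ui / u_combined) ** 2:5.1f} % of variance")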

Figure 7

Figure 7 Total amplifier gain measurement uncertainty, calculated using Maury MW Insight software.

Verification and Validation Tool

Table 1

Several methods and techniques are used to validate a calibration. Some use T-checkers or Beatty standards, others use pre-characterized verification standards. The quality of a calibration can be “bad” due to sources of error such as mixing up calibration standards, damaged standards or cables, loose connections or sudden noise in the system from environmental changes. Since these significant sources of error are usually not accounted for when characterizing the uncertainty contributors, they will not appear in the uncertainty budget, even though they degrade the quality of the calibration and, hence, the measurement accuracy.

This section addresses verification devices, because they enable validating the accuracy of a calibration and estimating the level of precision achievable. Per the International Vocabulary of Metrology (VIM) definition,12 verification provides objective evidence that the calibration fulfills specified requirements; however, since these requirements can be specified quite arbitrarily, validation13 is more important: it is verification in which the specified requirements are adequate for the devices intended to be measured.

Most current verification devices are not characterized with uncertainties, making it difficult for the user to specify an adequate requirement for the validation. In most cases, the user compares the reference characterization with the actual measurement and estimates how close the two are. This is quite subjective: Figure 8a shows a difference in magnitude between the two measurements, and the question is whether the agreement is close enough. Had the results included uncertainties, the user could proceed more systematically and quantitatively as follows:

  • Choose a verification standard which has been previously characterized with uncertainties and is representative of the measurement. For example, a fixed load different from the one used as a calibration standard can be selected for a one-port, low reflection measurement.
  • Validate that the uncertainties of the setup are not too large by 1) comparing the setup uncertainty with the uncertainty provided by the manufacturer of the verification device; 2) comparing the setup uncertainty at the 95 percent confidence level with the design tolerance of the DUT. The expanded uncertainty for the 95 percent confidence interval should always be smaller than the design tolerance; and 3) if the uncertainties do not satisfy the above two conditions, re-evaluate the VNA, cable, connector and the calibration kit used for the calibration.
  • A normalized error can be used to finalize the validation,7 where the scalar version is defined by:

e = |d̂| / (1.96 · u(d̂))

where d̂ is the estimate of the difference between the measurement and the verification device characterization and u(d̂) is the standard uncertainty of that difference. The factor 1.96 corresponds to a 95 percent coverage probability, which is quite common in conformity assessment. Figure 8b shows the uncertainties of the same amplifier measurement from Figure 8a. Areas of insufficient overlap of the two uncertainty bands result in values of e > 1 and indicate a failed verification.
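The sketch below evaluates this scalar normalized error for a few illustrative gain points, combining the setup and verification-device uncertainties as a root sum of squares (an assumption that holds for uncorrelated uncertainties) and flagging points where e exceeds 1.

    import numpy as np

    def normalized_error(meas, ref, u_meas, u_ref, k=1.96):
        """Scalar normalized error e = |d| / (k * u(d)), with d = meas - ref and
        u(d) combined as a root sum of squares of the two standard uncertainties
        (valid for uncorrelated contributions)."""
        d = np.abs(np.asarray(meas) - np.asarray(ref))
        u_d = np.sqrt(np.asarray(u_meas) ** 2 + np.asarray(u_ref) ** 2)
        return d / (k * u_d)

    # Illustrative gain data (dB) at three frequency points:
    user_meas = np.array([20.10, 20.05, 19.80])   # measurement with the user's setup
    reference = np.array([20.00, 20.00, 20.00])   # verification-device characterization
    u_user    = np.array([0.06, 0.06, 0.06])      # setup standard uncertainty
    u_ref_dev = np.array([0.04, 0.04, 0.04])      # verification-device standard uncertainty

    e = normalized_error(user_meas, reference, u_user, u_ref_dev)
    verdict = ["pass" if v <= 1 else "FAIL" for v in e]
    print("normalized error:", np.round(e, 2), "->", verdict)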

Figure 8

Figure 8 Comparison of “golden” device and user measurements (a); same data showing measurement uncertainty (b).

Conclusion

As technologies evolve and requirements become more challenging, implementing processes that increase confidence in measurements and ensure accurate, reliable characterization - and product performance - is critical. Characterizing and quantifying measurement uncertainty is one such process. Uncertainty can aid in definitively verifying a VNA calibration before measuring a DUT, and it can help engineers understand how the various components in a measurement system contribute to the overall uncertainty of the DUT measurement. Identifying, quantifying and reducing the major sources of uncertainty in a test setup will improve the accuracy of the overall measurement. Referring to the original amplifier scenario shown in Figure 1, quantifying measurement uncertainty provides confidence that the measurements reflect the true performance of the DUT and that the design will not pass one test and fail another.

References

  1. Clause 2.17, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.
  2. Clause 2.19, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.
  3. Clause 2.48, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.
  4. “Evaluation of Measurement Data: Guide to the Expression of Uncertainty in Measurement,” JCGM 100:2008, www.bipm.org/en/publications/guides/gum.html.
  5. M. Hiebel, “Fundamentals of Vector Network Analysis,” Rohde & Schwarz, 2007.
  6. J. P. Dunsmore, “Handbook of Microwave Component Measurements,” 2014.
  7. M. Zeier, D. Allal and R. Judaschke, “Guidelines on the Evaluation of Vector Network Analyzers (VNA),” EURAMET Calibration Guide No. 12, Version 3, 2018, www.euramet.org/publications-media-centre/calibration-guidelines/.
  8. “Improving Calibration Accuracy and Speed with Characterized Calibration Standards,” Maury Microwave App Note 5C-090.
  9. M. Zeier, J. Hoffmann, J. Ruefenacht and M. Wollensack, “Contemporary Evaluation of Measurement Uncertainties in Vector Network Analysis,” Cal Lab Magazine, October–December 2018, pp. 22–31, www.callabmag.com/contemporary-evaluation-of-measurement-uncertainties-in-vector-network-analysis/.
  10. Clause 2.41, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.
  11. M. Zeier, J. Hoffmann, P. Hürlimann, J. Rüfenacht, D. Stalder and M. Wollensack, “Establishing Traceability for the Measurement of Scattering Parameters in Coaxial Line Systems,” Metrologia 55, January 2018, S23–S36.
  12. Clause 2.44, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.
  13. Clause 2.45, “VIM: International Vocabulary of Metrology, Third Edition,” JCGM 200:2012, www.bipm.org/en/publications/guides/vim.html.