MIMO Expert Forum at CTIA 2011 – Top Ten Questions… and Answers!
To learn more, attend this year’s MIMO Expert Forum on Thursday, May 10, from 12:00 - 2:00 pm at CTIA 2012. This year’s Forum features the same expert speakers as last year, from Agilent Technologies (Moray Rumney), Elektrobit (Jukka-Pekka Nuutinen), ETS-Lindgren (Dr. Michael Foegelle), and Spirent Communications (Doug Reed), for a live and webcast session. For more information and to register, see the details at the end of this article.
1. What are the metrics and methodologies being considered for OTA testing of MIMO devices?
MF: There are three primary test methodologies being considered for MIMO OTA testing, each with its own pros and cons and several variants.

The first is an OTA multipath RF environment simulator, where the device is isolated from the real world in an anechoic chamber and an array of antennas is used to produce a boundary condition around the device, simulating various angles of arrival for the test signals. Advanced channel emulation with spatial channel models simulates the real-world environment outside the immediate vicinity of the device under test (DUT). This method, commonly referred to as the “anechoic chamber method” (although that name is something of a misnomer), offers the potential of flexible, complex environment simulation, subject to the limitations of the available channel emulation. Variants exist in terms of the number of elements in the array and the corresponding channel emulation required.

The second method is the reverberation chamber, or reverb, method. A reverb chamber is a metal box with abundant multipath and various mode-stirring paddles that move the modes around, randomizing the field structure. The net effect is a statistically uniform field from all directions. The system provides plenty of complex multipath, making it an ideal environment for obtaining MIMO operation from a DUT. However, this rich multipath environment is something of an edge case in terms of real-world performance, and the propagation delays involved do not look much like a typical LTE network. To address this, variants that use a channel emulator to introduce longer delays and fading behavior are being investigated.

The third method expands on a traditional laboratory technique for evaluating the performance of antennas and radios during research and development.
By capturing the vector pattern information, containing magnitude and phase for each antenna in the device, and embedding that in a channel model for a conducted test of the radio, a reasonable estimate of the expected performance can be achieved. The traditional methods of capturing these vector patterns involve performing passive tests of the antennas with cabled feeds. This requires modifying the device and introduces potential cable interactions and other issues that are normally avoided in OTA testing. To address this, this “two-stage” method uses data collected by the radio chipset itself to determine the antenna magnitude and relative phase patterns. While not a traditional OTA test in the sense of an end-to-end evaluation of the device in over-the-air operation, the overall suitability of the methodology will depend on the metric to be evaluated.

As far as metrics are concerned, it is now rather universally agreed that total throughput is the principal metric of interest. Whereas traditional TRP and TIS testing are about edge-of-link performance, MIMO performance is really about edge-of-bandwidth performance: when does the available throughput drop below an acceptable level of service? Even so, there are still open questions about which throughput level is to be evaluated. Discussions include whether this should be throughput in the presence of a known interference level vs. the self-interference of the platform, more akin to traditional TIS testing. The metric could also be either a single fixed-level test or some form of sensitivity search to determine the power level producing a target throughput level.
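The difference between a fixed-level test and a sensitivity search can be illustrated with a short sketch. Everything here is hypothetical: `measure_throughput` stands in for a real base station emulator query, and the toy throughput curve and power limits are invented purely for illustration.

```python
def measure_throughput(power_dbm, max_tput_mbps=100.0):
    """Toy throughput curve: full rate at high power, waterfall below -80 dBm.
    A real test would query the base station emulator here."""
    if power_dbm >= -80.0:
        return max_tput_mbps
    if power_dbm <= -95.0:
        return 0.0
    return max_tput_mbps * (power_dbm + 95.0) / 15.0  # linear waterfall region

def sensitivity_search(target_fraction=0.7, lo=-110.0, hi=-60.0, tol=0.1):
    """Binary-search the lowest power that still meets the target throughput."""
    target = target_fraction * measure_throughput(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if measure_throughput(mid) >= target:
            hi = mid   # target still met: try lower power
        else:
            lo = mid   # target missed: power must stay higher
    return hi

level = sensitivity_search()  # converges near -84.5 dBm for this toy curve
```

A fixed-level test would call `measure_throughput` once at a prescribed power; the search instead halves the interval until it brackets the power producing the target throughput.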
2. How does the number of antennas in the ring of probes method impact the size and quality of the quiet zone usable for testing at the center of the chamber?
MF: For an RF environment simulator consisting of an array of antennas in an isolated (anechoic) environment, the quality of the field structure in the test volume is primarily a function of the resolution of the active antennas in the boundary array, and not the total number of antennas used. While an evenly spaced array makes sense for a totally versatile and reconfigurable system, the same number of antennas can be re-arranged into localized clusters to produce a given target environment at much better resolution. The traditional definition of a quiet zone as the usable test volume does not have the same meaning for an environment simulation array. The array is necessarily larger than any far-field definition of R = 2D²/λ, since the array diameter D is by definition twice the range length R! Thus, it’s unclear how the traditional definition of a quiet zone applies at all, although the 1/R falloff of the electric field presumably has some impact. However, physics and experimental evidence show that the array factor of the system tends to minimize this impact. Instead of referring to a quiet zone, we typically talk about a correlation volume, although it’s really more useful to think of it as a correlation distance. The correlation distance indicates how far apart two antennas can be placed and still see the same correlation behavior that would be seen in the real-world environment we’re trying to model. For an evenly spaced eight-antenna circular array (45° resolution), this corresponds to a correlation distance of approximately 0.7λ, and it scales roughly linearly from there. Note, however, that this correlation distance can be anywhere within a potentially much larger test volume.
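The quoted 0.7λ correlation distance for an eight-probe ring can be sanity-checked numerically. The sketch below is an illustration under simplifying assumptions (2-D uniform angular spread, equal-power probes), not a validation procedure from the forum: it compares the spatial correlation produced by eight evenly spaced plane-wave directions against the ideal uniform-scattering result, which is the Bessel function J0(2πd/λ).

```python
import numpy as np
from scipy.special import j0

def ring_correlation(d_lambda, n_probes=8):
    """Spatial correlation between two points d wavelengths apart, for plane
    waves arriving from n_probes evenly spaced directions in the 2-D plane."""
    theta = 2.0 * np.pi * np.arange(n_probes) / n_probes
    return np.mean(np.exp(1j * 2.0 * np.pi * d_lambda * np.cos(theta))).real

# Ideal 2-D uniform scattering gives the Bessel function J0(2*pi*d/lambda).
for d in (0.2, 0.5, 0.7, 1.0):
    print(f"d = {d:.1f} lambda: ring = {ring_correlation(d):+.3f}, "
          f"ideal = {j0(2.0 * np.pi * d):+.3f}")
```

Up to roughly 0.7λ the eight-probe ring tracks the ideal curve closely; by 1λ the two visibly diverge, consistent with the correlation distance quoted above.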
3. How is field mapping used to validate the MIMO OTA system?
MF: For an RF environment simulator consisting of an array of antennas in an isolated (anechoic) environment, the quality of the simulation is a function of the resolution of the bounding array and the calibration of the system. In an ideal free space condition, an infinite number of plane waves converging on a single point would create a very predictable standing wave pattern, equivalent to that of an isotropic radiator producing spherical wave fronts emanating from a single point. This is easier to visualize in two dimensions as the ripples caused by a drop of water on the surface of an infinite lake. The strongest ripple is in the center and drops off as the wave radiates away. If the drop were placed in the exact center of a perfectly round cup, the waves would propagate to the boundary and reflect back to recreate the exact same wave pattern. However, if dropped among a circular array of evenly spaced sticks, a more interesting reflection pattern would be created, while the very center would maintain its circular symmetry. This is what happens in an Over-the-Air (OTA) simulator, where a portion of the field structure will match the desired field structure that would occur in free space, and the rest will break down into a more complicated interference pattern. By using an X-Y positioner or a linear positioner on a turntable, we can move a probe antenna through the test volume and map the fields to determine how close we come to the predicted ideal. We can also calculate the expected field structure for a given resolution, so we know when the result will deviate due to the resolution of the boundary condition. Any additional deviation between the measured and theoretical results provides an indication of the error contribution due to calibration and/or unwanted reflections in the test volume.
4. How are Doppler shift and antenna coupling modeled?
JP: Channel realizations are generated with the geometrical principle by summing contributions of rays (plane waves) with specific small-scale parameters, such as delay, power, angle of arrival (AoA), and angle of departure (AoD). Superposition results in correlation between the antenna elements and in temporal fading with a geometry-dependent Doppler spectrum. In MIMO OTA, Doppler is modeled just as it would be in a normal channel model, as described above; the only addition is that the local environment of the receiver is mapped into the angular domain.
The antenna coupling is also fairly straightforward. The DUT end is captured automatically (that is the purpose of the measurement: to see the effect of the coupling). The transmitter end is handled by multiplying the complex antenna gain pattern with the impulse response realizations.
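As a rough illustration of the geometric principle just described, the sketch below sums a handful of rays, each with a random AoA, AoD, and phase; the per-ray Doppler follows from the AoA geometry, and a transmit antenna pattern multiplies each ray's complex amplitude. The ray count, speed, carrier, and cosine pattern are all invented for illustration; they are not parameters of any standardized model.

```python
import numpy as np

rng = np.random.default_rng(0)

fc = 2.0e9                   # carrier frequency, Hz (assumed)
speed = 30.0 / 3.6           # mobile speed: 30 km/h in m/s (assumed)
fd_max = speed * fc / 3.0e8  # maximum Doppler shift, Hz (~55.6 Hz)

n_rays = 20
aoa = rng.uniform(0.0, 2.0 * np.pi, n_rays)       # angles of arrival at the DUT
aod = rng.uniform(-np.pi / 6, np.pi / 6, n_rays)  # angles of departure at the BS
phase = rng.uniform(0.0, 2.0 * np.pi, n_rays)     # random per-ray phases
amp = np.ones(n_rays) / np.sqrt(n_rays)           # equal-power rays

def tx_gain(theta):
    """Hypothetical transmit antenna pattern: complex gain vs. AoD."""
    return np.cos(theta)

# Per-ray Doppler shift follows from the arrival geometry.
doppler = 2.0 * np.pi * fd_max * np.cos(aoa)

# Channel realization: sum the rays over a 100 ms observation window.
# The Tx pattern multiplies each ray's complex amplitude, as described above.
t = np.arange(0.0, 0.1, 1e-4)
h = (amp * tx_gain(aod) * np.exp(1j * phase)
     * np.exp(1j * np.outer(t, doppler))).sum(axis=1)
```

The magnitude of `h` over time shows the familiar fast-fading envelope, and its spectrum is shaped entirely by the AoA geometry, which is the point of mapping the receiver's local environment into the angular domain.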
5. How do you calibrate the OTA systems?
MF: There are three basic steps to calibrating an RF environment simulator that consists of an array of antennas in an isolated (anechoic) environment, fed by an RF channel emulator. Such a system is designed to produce a known channel model between the transmit ports of a base station emulator/communication tester and the center of the test volume within the chamber. However, contributions such as cable loss in and out of the channel emulator, amplifier gain, antenna gain, and range path loss all alter the channel model generated by the channel emulator. Thus, it is necessary to pre-correct the inputs and outputs of the channel emulator to adjust for the relative differences of each path in or out of the unit. The first step is to normalize the inputs by measuring the relative path loss between each input to the channel emulator. This relative offset is used to adjust the reference level of each RF input to rebalance the signals entering the emulation stage of the channel emulator. After applying this correction, one input may be used to normalize all of the outputs, again finding the relative path loss for each path to the center of the test volume. Corresponding offsets are then applied to each output of the channel emulator so that each channel model output has the same path loss to the center of the test volume. The final step is to correct for the net end-to-end path loss of the system, accounting for the input and output losses, as well as the internal losses inherent in the given channel model and the array gain corresponding to the number of antennas feeding the test volume.
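The three calibration steps reduce to simple offset arithmetic. The sketch below uses made-up loss figures and port names; the model loss and array gain values are assumptions for illustration, not measured data.

```python
# Made-up measured losses in dB; port names are illustrative only.
input_loss = {"in1": 2.1, "in2": 2.6}                 # BS emulator -> emulator input
output_loss = {f"out{i}": 30.0 + 0.3 * i              # emulator output -> centre
               for i in range(1, 9)}                  # of the test volume

# Step 1: normalise the inputs relative to the lowest-loss path.
ref_in = min(input_loss.values())
input_offset = {port: loss - ref_in for port, loss in input_loss.items()}

# Step 2: normalise the outputs the same way, measured through one input.
ref_out = min(output_loss.values())
output_offset = {port: loss - ref_out for port, loss in output_loss.items()}

# Step 3: net end-to-end correction, combining the reference input and output
# losses with the channel model's internal loss and the array gain.
model_loss_db = 3.0   # internal loss of the channel model (assumed)
array_gain_db = 9.0   # gain of 8 antennas feeding the test volume (assumed)
end_to_end_db = ref_in + ref_out + model_loss_db - array_gain_db
```

After steps 1 and 2, every path in and out of the emulator carries a zero relative offset against the reference path, so the single end-to-end figure in step 3 fully describes the system loss.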
6. Which method is best?
MR: This is an important question that has different answers depending on the criteria chosen to make the value judgment. The two main dimensions along which it can be answered are technical and commercial. On the technical front, the method that most closely approximates the desired test conditions would be considered the best. At this point, no method has been validated for absolute accuracy against known antennas, so the factors that may or may not influence the results are not yet fully defined. Obvious parameters like power accuracy will have a direct impact on results, but more subtle factors, such as the size of the “quiet zone” at the center of the test volume in which the required signal correlation has been achieved, are much more difficult to validate. During the coming months, reference antennas will be used to help answer some of these questions. On a commercial level, the best method might be the most cost-effective or affordable. Most of the alternatives to the ring-of-probes method are motivated by the desire to lower the cost of the final solution, with an associated loss in test coverage. The degree to which the test system can be simplified is not yet fully quantified. For instance, can a reverberation chamber provide sufficient spatial diversity to distinguish good devices from bad? Such questions will be answered when reference antennas are used to validate the different test methods.
7. Which is going to be “standard”?
MR: It is too early to say which method will be standardized. The 2D implementation of the ring of probes, or a simplified single- or multi-cluster smaller array of probes, seems the likely candidate for initial standardization, cost arguments aside. A full 3D implementation of a sphere of probes is the technical “home run,” since it would be capable of emulating any 3D channel, but it would be prohibitively expensive. Full 3D is not yet proven to be necessary and may present practical limitations such as backscatter from the large number of probes required. The two-stage method can deal with the 2D/3D issue but is not currently defined for measuring self-interference. The significance of this, and possible alternative ways of measuring self-interference, are under study. The reverb chamber method naturally handles 3D (and cannot do 2D) but does not provide specific spatial control. Whether this is a technical showstopper remains to be proven in simulation and in validation using reference antennas. If limited spatial control turns out not to be an issue, then reverberation chambers could also be standardized. For these and other reasons, it may be decided that more than one test method is capable of returning results comparable or traceable to other methods, in which case more than one method may be standardized.
8. What are the interference challenges?
MR: Along with the spatial and correlation attributes of the wanted signal come the properties of the unwanted signal. In any real-life environment it is the non-ideal Signal to Interference plus Noise Ratio (SINR) that limits spatial multiplexing gain. With infinite SINR, any receiver could decode multiple streams provided the signals arriving at each antenna were fractionally different. However, in the presence of interference (or noise), the ability of the receiver to decode multiple streams decreases, so the characteristics of the interference are almost as important to define as those of the signal itself. Traditionally, receiver performance has been characterized using isotropic white Gaussian noise. This is a good model of Code Division Multiple Access (CDMA) network interference for a cabled test that has no spatial content; however, in an OTA context the spatial characteristics of interference may be significant. In most instances, interference will arrive from a different direction than the signal, which favors an antenna with good spatial diversity. Testing such an antenna with isotropic interference, or matching the spatial characteristics of the interference to those of the signal, would not provide an accurate measure of the true performance of the antenna. In Orthogonal Frequency-Division Multiplexing (OFDM) systems, there are additional dimensions to the interference, including its frequency selectivity and temporal characteristics. In CDMA systems, the interference is constant across the channel and over time, since it is the average of many User Equipment (UE); but in OFDM, the possibility of orthogonal narrowband scheduling means that interference is also narrowband and statistical rather than constant and broadband. It will be through simulation and measurement that the significance of the interference definition will be understood, so that the correct environment is specified for MIMO OTA testing.
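The claim that SINR limits spatial multiplexing gain can be illustrated with the standard Shannon capacity formula for a MIMO channel, C = log2 det(I + (SINR/Nt)·HHᴴ). The 2x2 channel matrix below is fixed and invented purely for illustration.

```python
import numpy as np

def mimo_capacity(snr_linear, h):
    """Shannon capacity (bit/s/Hz) with equal power split across Tx antennas."""
    nt = h.shape[1]
    m = np.eye(h.shape[0]) + (snr_linear / nt) * (h @ h.conj().T)
    return float(np.log2(np.linalg.det(m).real))

# Fixed, invented 2x2 channel with reasonable conditioning (illustration only).
h = np.array([[1.0, 0.3],
              [0.2, 0.9]], dtype=complex)

for sinr_db in (0, 10, 20, 30):
    cap = mimo_capacity(10 ** (sinr_db / 10), h)
    print(f"SINR {sinr_db:>2} dB: {cap:5.2f} bit/s/Hz")
```

At low SINR this channel supports barely one bit/s/Hz, while at high SINR the capacity grows at nearly twice the single-stream slope per decade; that second-stream gain is exactly what real-world interference erodes.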
9. Explain chipset standards (for amplitude/phase).
MR: One of the methods being studied by 3GPP, the two-stage method, requires support from the device chipset in order for the UE to measure its own antenna pattern. This is not an ideal situation, since such specifications require some effort on the part of the standards body and UE developers; however, the upside is that, once implemented, the test mode enables an alternative test method with the potential to address arbitrary 3D environments at low cost. The proposed test mode requires the UE to return the absolute power received at each antenna and the phase between the antennas. Power measurements are commonplace and are used for Reference Signal Received Power (RSRP) reporting in faded conditions. For MIMO OTA, the test conditions for the pattern measurement stage are static, and absolute accuracy is unimportant. The linearity of the power measurement is relevant, although this can be corrected afterwards if necessary. The phase measurement has no existing equivalent but is a conceptually simple measurement that the UE can make with high accuracy. Early results from one chipset vendor, across several different UE designs, have shown excellent pattern measurements, which have been used to reproduce the results achieved with traditional anechoic chamber multi-probe methods.
10. How long will it take to finish the standard?
MR: CTIA has set a goal of completing a test method by mid-2013, although the driver is technical credibility rather than the date as such. 3GPP are about to launch the work item phase of their activity, and at the time of writing the date for completion is not known. 3GPP may additionally choose to specify performance requirements using the chosen test methods. This final stage of standardization may take a considerable amount of time. For SISO, for example, it took 3GPP some three years after the test method was finalized before performance requirements were agreed between the network operators and the UE vendors. For MIMO OTA, the difficulties in agreeing on performance requirements are substantially greater. This might mean that 3GPP follows CTIA’s plan to specify only a test method, used for device benchmarking rather than setting specific pass/fail limits.
To learn more, attend the MIMO Expert Forum at CTIA 2012 on Thursday, May 10, from 12:00 pm – 2:00 pm. For more information, including presentation titles, abstracts, and speaker bios, visit MIMO Forum.
Not attending CTIA 2012 in New Orleans?
Join the FREE Webcast instead, register at: MIMO Webcast