MWJ: Can you give us a little background on your company Simulation Technology and Applied Research Inc. (STAAR)?

JD: I have always had a strong interest in the development of electromagnetic (EM) analysis software and had been toying with the idea of starting a software company since I first joined Lawrence Livermore National Laboratory back in 1987. A convergence of events in the mid-1990s created an opportunity for me to receive SBIR funding from the Department of Energy. So in 1997, I used that SBIR seed money to form STAAR and take on the challenge of solving extremely large and highly complex 3D EM designs.

MWJ: In this month’s issue of Microwave Journal, you define a "big" problem in terms of either its physical complexity or its required level of accuracy. Can you elaborate on this?

JD: The most practical definition of problem size is the number of mathematical operations needed to solve it, since this relates directly to how long it takes to get results on a computer. Most numerical methods for solving partial differential equations (such as Maxwell's equations) rely on subdividing the model space into simple elements such as triangles or tetrahedra. The field variation within each element depends on a few parameters with unknown values, and the unknowns are typically determined by solving a matrix equation with one row for each unknown. The computational challenge is the formation and solution of this matrix equation.
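
To make the mesh-to-matrix pipeline concrete, here is a minimal one-dimensional sketch. It is not Analyst's formulation (Analyst uses 3D vector finite elements and its internals are not public); it only illustrates the structure the answer describes: subdivide, assemble element contributions into a global sparse matrix, and solve.

```python
# Minimal 1D finite-element sketch: solve -u'' = 1 on [0, 1] with u(0) = u(1) = 0.
# Illustrative only -- the same mesh -> element matrices -> global sparse
# system -> solve structure underlies 3D EM codes, at far greater complexity.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

n_elem = 100                        # number of elements (subdivisions)
nodes = np.linspace(0.0, 1.0, n_elem + 1)
n_unk = len(nodes)                  # one unknown per node for linear elements

A = lil_matrix((n_unk, n_unk))      # global sparse matrix, one row per unknown
b = np.zeros(n_unk)

for e in range(n_elem):             # loop over elements
    h = nodes[e + 1] - nodes[e]     # element length
    ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness
    fe = (h / 2.0) * np.array([1.0, 1.0])                  # element load, f = 1
    for i in range(2):
        b[e + i] += fe[i]
        for j in range(2):
            A[e + i, e + j] += ke[i, j]   # scatter into the global matrix

for bc in (0, n_unk - 1):           # impose the boundary conditions
    A[bc, :] = 0.0
    A[bc, bc] = 1.0
    b[bc] = 0.0

u = spsolve(A.tocsr(), b)           # solve the global matrix equation
print(u.max())                      # peak of the exact solution x(1-x)/2 is 0.125
```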

A problem becomes "big", that is, it takes a large number of mathematical operations to solve, when there are a large number of unknowns, when the coupling between unknowns is extensive, or both. A large number of unknowns and equations is typically needed to represent geometrically complex structures, and also to model structures that are electrically large, meaning they contain many cubic wavelengths. When very high accuracy is required, more unknowns and more extensive coupling between unknowns are needed to capture subtle field variations, again leading to "big" calculations. Even problems involving relatively modest numbers of unknowns can require excessive computation if a large number of frequency points must be evaluated. The technology in Analyst is capable of solving these types of "big" EM problems using a variety of state-of-the-art numerical techniques on clusters and other parallel computing resources.
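
A rough back-of-the-envelope illustration of why electrical size drives unknown counts; the mesh density and unknowns-per-element ratios below are common rules of thumb assumed for the sketch, not figures from the interview.

```python
# Rough sizing estimate for an electrically large volume, assuming the
# common rule of thumb of ~10 mesh cells per wavelength per dimension.
# Both constants are illustrative; real counts depend on geometry and
# the accuracy required.
cells_per_wavelength = 10
volume_cubic_wavelengths = 1000       # e.g., a 10 x 10 x 10 wavelength box
unknowns_per_cell = 1.2               # rough edges-to-tets ratio for low-order elements

cells = volume_cubic_wavelengths * cells_per_wavelength ** 3
unknowns = cells * unknowns_per_cell
print(f"{cells:.1e} cells -> roughly {unknowns:.1e} unknowns")
# ~1e6 cells -> roughly 1.2e6 unknowns, before any accuracy-driven refinement
```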

MWJ: Are there still challenges in solving big problems and, if so, what are the limitations of current solutions?

JD: For discrete approaches such as the finite-element method (FEM), solving the matrix equation has largely been the limiting factor for EM analysis tools. Historically, a problem became too big to solve when it could no longer be loaded and solved on a single computer. Over the years, however, the introduction of spectral decomposition and multi-core PCs has eased this hardware-centric limitation. Yet as more became possible on a PC, designers naturally also wanted to perform optimization and parametric studies.

One step forward, two back. For optimization, the matrix must be formed and solved repeatedly, so even a problem that can be solved on a single computer may take too much time to be practical. In such cases it becomes necessary to distribute the computational work efficiently across multiple computers. Structuring the solution process so that adding more computers consistently reduces the time needed to solve a given problem has been a major focus of the Analyst R&D team for more than 10 years now.

MWJ: Shared versus distributed memory has been a roadblock for other EM tools. What is different about Analyst?

JD: When multiple processors or computers are used to solve a problem, they typically use either a shared memory model, in which a common pool of RAM is accessed by all the processors, or a distributed memory model, in which each processor accesses only its private RAM. For the processors on a single board, whether separate processors, separate cores, or both, the shared memory model can be relatively efficient. Individual cores can carve out portions of the system RAM for their own use and also access shared locations as needed. Shared memory applications are also relatively easy to write because there is typically no need to explicitly manage communication between processors. As a result, most current "parallel" solvers use this model. However, the shared memory model does not scale well because the processors share a common communication path to physical memory.

Computer boards can also support only a limited amount of memory, and using a shared memory model across multiple computers requires software emulation that is usually extremely inefficient. The distributed memory model used in Analyst circumvents these limitations by explicitly managing communication between processes. The approach is highly scalable and allows a virtually unlimited number of computers to be used efficiently on a given analysis. It also provides broad flexibility in configuring the numerical method to make optimal use of a parallel computational resource such as a cluster.
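
A toy illustration of the distributed-memory pattern. Analyst's internals are not public, and cluster codes typically use MPI for this; the sketch below uses Python's standard multiprocessing module purely to show the idea: each worker owns a private slice of the data and shares results only through explicit messages, never through shared RAM.

```python
# Toy distributed-memory pattern: each process owns a private slice of two
# vectors and contributes to a dot product solely via explicit messages.
import numpy as np
from multiprocessing import Process, Queue

def worker(rank, local_a, local_b, out):
    # Each process computes using only its own private data ...
    partial = float(np.dot(local_a, local_b))
    # ... and communicates the result explicitly, as a message.
    out.put((rank, partial))

if __name__ == "__main__":
    n, n_procs = 1_000_000, 4
    a, b = np.random.rand(n), np.random.rand(n)

    out = Queue()
    chunks_a = np.array_split(a, n_procs)   # partition the data up front
    chunks_b = np.array_split(b, n_procs)
    procs = [Process(target=worker, args=(r, chunks_a[r], chunks_b[r], out))
             for r in range(n_procs)]
    for p in procs:
        p.start()
    total = sum(out.get()[1] for _ in procs)  # gather the partial results
    for p in procs:
        p.join()
    print(total, np.dot(a, b))  # should agree, up to floating-point rounding
```

Because every exchange is an explicit message, the same pattern keeps working when the "workers" are separate machines on a network, which is what makes the distributed model scale where shared memory cannot.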

MWJ: What is the largest number of unknowns that has been solved with Analyst? Is there any theoretical limit?

JD: We have solved problems with up to about 100 million unknowns using Analyst solvers on modest Linux clusters. However, the number of unknowns is not a good indicator of the amount of computational work needed to solve a sparse matrix equation like the one obtained using FEM. For these types of matrices, a better indicator is the total matrix fill: the total number of non-zero entries in the matrix. The total fill is the number of unknowns multiplied by the average row fill (the average number of non-zeros per row). Typical row fills vary by more than an order of magnitude across the range of basis sets supported in Analyst, from linear to quintic. This means that the amount of computational work needed to solve a given number of unknowns can vary by a factor of 10 or more depending on the choice of basis function order and type of matrix solver.
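
To put numbers on this relationship: the row-fill values below are illustrative assumptions chosen to span roughly an order of magnitude, as the answer describes; they are not published Analyst figures.

```python
# Total matrix fill = unknowns x average row fill. The per-basis row-fill
# values are assumptions for illustration only.
unknowns = 100_000_000
row_fill = {"linear": 15, "quintic": 300}   # assumed avg non-zeros per row

fill = {basis: unknowns * rf for basis, rf in row_fill.items()}
for basis, nnz in fill.items():
    print(f"{basis:8s}: {nnz:.1e} non-zero entries")
print(f"work ratio ~ {fill['quintic'] / fill['linear']:.0f}x")
# 300 / 15 = 20x -- the same unknown count, an order of magnitude more work
```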

The number of unknowns that can be solved with Analyst is theoretically limited by, among other things, the floating-point precision of the computer, but practical considerations such as the amount of RAM available to each processor and processor speed are the usual limiting factors. For typical cluster node configurations we can solve problems with between 100,000 and 1 million unknowns per node, depending on the finite-element basis set used. I have been involved with an Analyst simulation that was run on up to 1024 processors on Department of Energy supercomputers.
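
A rough estimate of why RAM per node is the practical limit; the row fill is again an assumed illustrative value, and 16 bytes corresponds to one complex double-precision matrix entry.

```python
# Storing the sparse system matrix alone takes roughly
# unknowns x row_fill x bytes_per_entry. All values are illustrative.
unknowns_per_node = 1_000_000
avg_row_fill = 30                 # assumed non-zeros per row
bytes_per_entry = 16              # one complex double-precision value

matrix_bytes = unknowns_per_node * avg_row_fill * bytes_per_entry
print(f"~{matrix_bytes / 2**30:.2f} GiB per node for matrix storage alone")
# ~0.45 GiB here -- but factorization-based solvers need many times the
# raw matrix storage, which is what drives the per-node limits in practice.
```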

MWJ: So how does this computing power translate into challenging design simulations?

JD: Well, we've just finished solving a 384-way power divider such as might be used in a phased-array antenna. The Analyst model for the device contains three layers of WR51 waveguide, with irises at the joints and tabs at the "T"s to adjust how much power goes in each direction. The mesh contains just over 17 million elements, yielding about 18 million unknowns, and in our run all of the ports (except, of course, the feed port) were shorted. Simulation time was about an hour on a 12-node cluster. Since the acquisition of STAAR by AWR, we've been using Analyst to solve a variety of equally complex problems and are achieving similarly impressive results.

MWJ: There are several 3D EM products on the market. Apart from solving a new class of large problems, what would you say is another significant benefit of Analyst for engineers currently using some other 3D EM tool?

JD: Having access to Analyst within the design flow of Microwave Office® (MWO) software will make it possible to use highly accurate 3D FEM to seamlessly solve complex circuits alongside the company’s other EM tools, without ever leaving the MWO or Analog Office design environments. Together, Analyst and MWO form a formidable toolkit that can solve virtually any problem RF and microwave design engineers are likely to face, and they will do so more easily, faster and more accurately than has previously been possible.

MWJ: So now Microwave Office offers (an AWR-based) 3D EM simulator within its design flow to complement its other EM products (planar) and third-party solutions (EM Socket). How does Analyst compare to AWR’s AXIEM product, and how is that tool progressing?

JD: AXIEM is an incredibly powerful tool for 3D planar EM analysis, and in its latest iteration delivers speed and accuracy at least three times that of the previous version. It now supports processors with up to eight cores and is a true 64-bit tool, so it can solve designs with 1 million unknowns at high speed on a desktop PC. It achieves near-linear performance with its method-of-moments (MoM) engine and has shown it can outperform other 3D planar solutions that have been on the market for more than 10 years. Overall, AXIEM has the potential to move planar EM analysis from its former role as a back-end verification tool to a diagnostic solution that can be used throughout the design process.

MWJ: When will Analyst be available?

JD: Analyst has been available for particle accelerator and microwave tube applications for 10 years now. Analyst within the AWR product portfolio is on target to be an integrated part of the Microwave Office design environment within the next calendar year.