When I-Connect007 asked me to contribute to this issue on field solvers, I wondered what more could be added to this subject. But as a supplier and developer of field solvers, Polar is still asked some of the same questions—both by experienced customers who are perhaps exposed to a new scenario and, as is most welcome, by new entrants to the industry.
I will start by saying that all field solvers are accurate; they solve Maxwell’s equations by one or another of the available mathematical methods. When all are fed with the same data, all should generate very similar results, and any differences observed will be orders of magnitude less than the variations in the PCB transmission lines caused by the composite nature of PCB substrate dielectrics and the variations of the plating and etching processes.
However, “field solver” is a generic term for a range of tools used, in this application, to predict the behavior of PCB transmission lines. It is important to remember that some parts of a field solver are not actually field solving. Calculating the loss owing to the surface roughness of copper is a prime example. The surface of plated copper is so complex that fully field solving it would be impractical, so most commercial “solvers” overlay the core field-solving function with empirical techniques: Hammerstad, Groisse, Huray, and Cannonball-Huray, to name just a few. These empirical methods extend the capability of the field solver into modelling parameters that are:
- Vital for modelling insertion loss
- Impractical to field solve, given the complexity of the surface profile and the compute power available to even the best-equipped SI engineers
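To make the nature of these empirical overlays concrete, here is a minimal sketch of the widely published Hammerstad roughness correction, which scales smooth-conductor loss by a factor that depends on the ratio of RMS roughness to skin depth. This is my own illustration, not code from any commercial solver; the copper resistivity and the 0.5 µm roughness figure are illustrative assumptions.

```python
import math

def skin_depth_m(freq_hz, resistivity_ohm_m=1.68e-8, mu_r=1.0):
    """Classical skin depth: delta = sqrt(rho / (pi * f * mu)).

    Default resistivity is that of annealed copper."""
    mu = mu_r * 4.0e-7 * math.pi  # permeability of free space times mu_r
    return math.sqrt(resistivity_ohm_m / (math.pi * freq_hz * mu))

def hammerstad_factor(rms_roughness_m, freq_hz):
    """Hammerstad correction: multiply smooth-copper conductor loss by
    K = 1 + (2/pi) * atan(1.4 * (Rq / delta)^2), where Rq is the RMS
    roughness and delta the skin depth at the frequency of interest."""
    delta = skin_depth_m(freq_hz)
    return 1.0 + (2.0 / math.pi) * math.atan(1.4 * (rms_roughness_m / delta) ** 2)

# Illustrative values: 0.5 um RMS roughness at 1 GHz and 10 GHz
for f_hz in (1e9, 10e9):
    print(f"{f_hz / 1e9:4.0f} GHz: loss multiplier K = {hammerstad_factor(0.5e-6, f_hz):.3f}")
```

Note that the Hammerstad factor saturates at 2 when the roughness is large compared with the skin depth, which is one reason Huray-type models are often preferred for multi-gigahertz work.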
Feeding the solver with the correct dimensions is vitally important; no tool will give an accurate result if fed with incorrect starting parameters. Customers frequently ask whether the height of the transmission line structure should include the trace thickness. This is easy to answer if you are working “backward” from a microsection, but if you are imagining the finished PCB from a simulation, it is less obvious.
The question in Figure 1 is set as a puzzle because it is one of the most frequently asked questions about transmission line modelling. The H1 dimension represents the core thickness, and the H2 dimension runs from the top of the core to the foil above. You can see that H2 denotes prepreg, as the trapezoidal trace is pressed into the Er2 region. By how much? If you work from a microsection, the dimensions will be clear to see. But when modelling with a solver before the board is built, you must predict how much T1 will affect the H2 dimension.
Think about it: If there is a lot of copper on the signal layer, most of T1 will need to be added to the pressed thickness of the prepreg; if the routing density is low, far less. This is where a good stackup tool comes in handy: it will virtually press the prepregs, taking the copper density on the signal layer into account, to calculate the optimum value of H2 to feed into the solver. This is the point I was stressing earlier: Commercial PCB transmission line field solvers must possess a variety of tools and capabilities over and above the core solver engine to feed it with good mechanical data.
As you look at Figure 2, consider the question posed by Figure 1. By using Speedstack to pre-process the material data, the pressed height of H2, including the impact of T1, has been calculated to feed into the solver engine. The left-hand side of the image shows the raw prepreg thicknesses. The signal trace (shown in blue) has a 5-mil core below and two sheets of 3-mil prepreg above. Another core is placed above the two prepregs, at the top of the highlighted area. Now look at the structure on the right-hand side of the picture: an offset stripline, with the signal on the blue layer shown on the left-hand side.
As you would expect, H1 = 5 mil (the lower core thickness), but H2 does not equal 6 mil (3 mil plus 3 mil of prepreg). H2 is calculated as 6.28 mil: the combined thickness of the structure once the two sheets of prepreg are pressed into the copper distributed on the signal layer. Taking care of this type of pre-processing is the key to obtaining accurate predictions from your solver.
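The bookkeeping behind a pressed height like this can be sketched with a simple volume-conservation model: the copper remaining after etching occupies solid volume within the prepreg layer, so the core-to-foil spacing grows by the trace thickness multiplied by the copper coverage. This is a back-of-envelope illustration, not Speedstack's actual pressing algorithm, and the trace thickness and coverage values below are assumptions chosen only to reproduce the 6.28 mil figure.

```python
def pressed_height_mil(raw_prepreg_mil, trace_thickness_mil, copper_coverage):
    """Back-of-envelope volume-conservation model (NOT Speedstack's algorithm).

    Assumes all resin stays in the layer and fills the etched voids, so the
    pressed core-to-foil spacing is the raw prepreg thickness plus the solid
    copper volume per unit area: H2 = P + coverage * T1.
    """
    return raw_prepreg_mil + copper_coverage * trace_thickness_mil

# Two 3-mil sheets (6 mil raw); an assumed 1.4 mil finished copper thickness
# at an assumed 20% coverage is one combination that yields H2 = 6.28 mil.
h2 = pressed_height_mil(raw_prepreg_mil=6.0, trace_thickness_mil=1.4,
                        copper_coverage=0.20)
print(f"pressed H2 = {h2:.2f} mil")  # -> pressed H2 = 6.28 mil
```

In this simple model, a dense plane-like layer (coverage near 1) adds nearly all of T1 to the spacing, while sparse routing adds very little, which matches the qualitative behavior described above.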
Viewing the stack in Figure 2 as a schematic with fixed-size layers is ideal from a planning perspective, but once the stack is complete and pressed, a proportional view makes it plain to see, in the blink of an eye, whether you have inadvertently added the wrong materials.
I have shown only a small sample of the techniques you need to deploy to ensure that a field solver engine is fed with accurate data. Quality material data from the supplier is also key, but the main takeaway is this: When choosing the appropriate field solver for your requirements, always remember that all solvers are accurate; it is the pre-processing of the data fed to the solver that unlocks its full potential. This holds from lossless lines up to around 2 GHz through to ultra-high-speed lines where insertion loss also needs serious consideration. You should also take care that measurement data is validated, but that is the topic of another article, already partially covered in my April 2022 column in Design007 Magazine, “Using Touchstone Files to Build Measurement Confidence.”
This column originally appeared in the July 2022 issue of Design007 Magazine.