On-site Capability Measurement
The use of fine-pitch components, accompanied by the transition to lead-free, will force PCB manufacturers to learn more about their pick-and-place machines’ capabilities. This article describes a method for inexpensive, in-house capability measurements, comparable to those of the IPC-9850 standard.
By Mattias Jonsson
When ultra-fine-pitch ICs and 01005 chip components push pick-and-place machines to their accuracy limits, an SMT process can no longer be accepted or rejected based on a single three-sigma accuracy value. Other factors, such as product complexity and process stability, need to be considered before deciding if a certain process will deliver acceptable production economy. The need for statistical process control (SPC) and capability studies will increase as pick-and-place machines are used closer to specification limits.
The IPC-9850 standard has established methods for comparing surface mount equipment in several respects; for example, reliability, reject rate and capability. But the need for an expensive coordinate measuring machine (CMM) rules out regular, on-site capability measurements for all but the largest companies. IPC-like measurements, however, can be made without a CMM through a modification of the IPC glass plate. The key to affordable capability measurements is the introduction of local fiducials, which allow a pick-and-place machine's built-in vision system to be used as the measurement instrument. It is important not to lose sight of the overall goal of any capability measurement - to determine whether an SMT process can produce a product at an acceptable quality and cost. Some knowledge of the theory behind quality control is essential to use these tools correctly; the cost of making an incorrect decision based on data from a process characterization tool can be much higher than the cost of implementing the tool itself.
Requirements and Defects
Traditionally, quality has been equated with staying within tolerance limits. If a product falls within these limits it is accepted, and there is no incentive to improve the production process further. Figure 1 shows a target value (T), an upper specification limit (USL) and a lower specification limit (LSL). The Y-axis shows the loss suffered due to failure to meet the tolerance requirements.
Figure 1. Tolerance limits.
A product that lies just above the LSL is approved, while one just below the LSL is rejected. It is not entirely satisfactory that two almost identical products are evaluated so differently. A more modern approach is to apply the concept of a "loss function", which expresses the cost of the loss of quality that arises from deviating from the nominal value. What the loss function looks like depends upon the individual product, its areas of use and the consequences of error. Genichi Taguchi, considered a key figure behind Japan's quality success, suggested that the loss function be modeled as a parabola with its minimum at the nominal value. With such a loss function, the loss increases both with deviation from the target and with increased spread (Figure 2), so it pays to steer the process toward the nominal value. Specification limits must be set with care: choosing tight tolerances just to be on the safe side may reassure the individual designer, but it drives up production costs.
Figure 2. Taguchi’s loss function.
Standardization

Figure 3. 0201 chip component with pads.
Choosing the most cost-effective tolerances when assembling electronic components is not easy, but there are international standards to lean on. IPC/EIA J-STD-001C (Requirements for Soldered Electrical and Electronic Assemblies) regulates procedures and requirements for the production of soldered electrical and electronic assemblies. The standard places a number of requirements on the product and the process; if a requirement is not fulfilled, the deviation is judged to be either acceptable, a process indicator or a defect. For example, an 0201 chip component has a width (W) of 0.3 mm and a length (L) of 0.6 mm. Electrically conducting terminations at each end of the component are soldered to two pads of width P. It is essential that the side overhang A is not too large (Figure 3). Section 9.2.6.5 gives the dimensional criteria.
The requirement for a maximum side overhang can be translated into specification limits for the component's position. If the target value is T, the maximum allowed side overhang is Amax and P ≥ W, the specification limits can be calculated as:

USL = T + (P - W)/2 + Amax
LSL = T - (P - W)/2 - Amax

In our case, W = 0.3 mm and P = 100%-140% of W. Assume that P = W = 0.3 mm, that T = 0 and that the maximum allowed side overhang is 50% of W. We then have the following specification limits for the placement of the 0201 component:

LSL = -0.15 mm, USL = +0.15 mm
The capability index of a placement machine cannot be calculated unless these specification limits have been decided.
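The translation from overhang requirement to specification limits can be sketched in a few lines of Python. The 50%-of-width overhang limit and the function name are assumptions for this example; consult the standard for the criteria that apply to your product class.

```python
# Sketch: translate a maximum side-overhang rule into placement
# specification limits for a chip component. The 50%-of-width default
# is an assumption based on common IPC acceptance criteria.

def placement_spec_limits(W, P, T=0.0, max_overhang_ratio=0.5):
    """Return (LSL, USL) for the component's lateral position.

    W: component (termination) width in mm
    P: pad width in mm (assumed P >= W)
    T: target (nominal) position in mm
    """
    if P < W:
        raise ValueError("formula assumes P >= W")
    # The component can shift by (P - W)/2 before any overhang occurs,
    # plus the allowed overhang itself.
    max_shift = (P - W) / 2 + max_overhang_ratio * W
    return T - max_shift, T + max_shift

lsl, usl = placement_spec_limits(W=0.3, P=0.3)
print(lsl, usl)  # -0.15 0.15 (mm) for the 0201 example
```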
Variation - A Cause of Losses
There often are many causes for variations in a process. We usually separate causes of variation that can be identified from those that cannot. There is a natural “background noise” that consists of many small and essentially unavoidable causes of variation. These causes are called “chance causes of variation”. Sometimes specific causes of variation occur, such as incorrect material or damaged tools; and these causes are known as “systematic” causes. A process that only has chance causes of variation is said to be in statistical control, or stable. Equipment and production methods work as well as possible, and if you want to reduce variation further, you often need new investments. A process that is in statistical control can be brought out of control by situations such as using incorrect material. Systematic variation often has a few causes, and usually new investments are not needed to correct the error. To control systematic variation, we use SPC. To be able to detect the presence of systematic causes of variation, you must model the process as it appears when only chance causes are present. Depending on what the data look like and the purpose of the analysis, chance causes of variation can be modeled in various ways. Keep in mind, however, that each model is a compromise between being realistic and being manageable.
Models for Random Variation
If X represents the position of the 0201 component in the previous example and all systematic causes of variation are eliminated, the position can be modeled as an average µ plus a random variation ε.
X = µ + ε
In SPC, the normal distribution is most often used as a model for the variation. Note that not all stochastic variables are normally distributed; if we had instead modeled the number of functioning components on the printed board, a binomial distribution would have been preferable. The main reason for the usefulness of the normal distribution is the Central Limit Theorem, which states that the distribution of a sum of many independent stochastic variables tends toward normal, even when the distribution of each individual variable is non-normal. As stated, variation in a production process is often caused by many small, essentially unavoidable causes, and can therefore be modeled with the normal distribution. If X is normally distributed, it is completely described by its average value (µ) and standard deviation (σ). Figure 4 shows that the parameter σ has a direct interpretation: in a large random sample, an interval of ±σ around the average value can be expected to contain about 68% of the measured values. Similarly, an interval of ±3σ would contain about 99.7% of the measured values.
Figure 4. Normal distribution.
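The coverage figures quoted above are easy to check with the Python standard library; this is only an illustrative sketch.

```python
# Quick check of the normal-distribution coverage figures (stdlib only).
from statistics import NormalDist

nd = NormalDist(mu=0.0, sigma=1.0)
within_1s = nd.cdf(1) - nd.cdf(-1)   # fraction within ±1 sigma
within_3s = nd.cdf(3) - nd.cdf(-3)   # fraction within ±3 sigma
print(f"±1σ: {within_1s:.1%}, ±3σ: {within_3s:.2%}")  # ±1σ: 68.3%, ±3σ: 99.73%
```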
Process Control - Eliminate Systematic Variation
If a process is in statistical control and something then happens to it, the problem needs to be discovered and corrected. One way of discovering sudden problems is to use a control chart. The technique is based on taking random samples at regular intervals, calculating the average value and standard deviation of each sample, and plotting these in separate charts. Figure 5 shows two charts, one for the average value (x) and one for the standard deviation (s). A random sample of size 4 is taken each hour; the average value and standard deviation are calculated and plotted in their respective charts. There are two control limits - the UCL (upper control limit) and the LCL (lower control limit) - and as long as the plotted values fall between them, the process is considered to be in statistical control. Note that control limits are not identical to specification limits; they are solely meant to indicate whether the process is in control.
Figure 5. X-S diagram
Even if the process is in statistical control, there is a risk that the control chart will give an alarm by chance. One common way to design a control chart is to set the control limits at the average value plus/minus three standard deviations of the plotted statistic. If the process is in control and the data are normally distributed, each sample then triggers a false alarm with 100% - 99.73% = 0.27% probability. This means that, on average, you can expect a false alarm every 370th sample - every 370th hour if the sample rate is one sample per hour. An alarm indicates that the process should be investigated and any systematic causes of error eliminated.
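As a sketch of how such limits can be computed, the following Python fragment derives x-bar- and s-chart control limits for subgroups using the standard c4 unbiasing constant. The function names and data layout are assumptions for illustration, not a production SPC implementation.

```python
# Sketch: x-bar / s control-chart limits from subgrouped data
# (stdlib only). Subgroups are lists of equal size, e.g. size 4.
import math
from statistics import mean, stdev

def c4(n):
    # Unbiasing constant for the sample standard deviation.
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

def xbar_s_limits(subgroups):
    n = len(subgroups[0])
    xbars = [mean(g) for g in subgroups]
    sds = [stdev(g) for g in subgroups]
    xbarbar, sbar = mean(xbars), mean(sds)
    # Classical A3, B3, B4 chart constants, derived from c4.
    a3 = 3 / (c4(n) * math.sqrt(n))
    width = 3 * math.sqrt(1 - c4(n) ** 2) / c4(n)
    b3, b4 = max(0.0, 1 - width), 1 + width
    return {
        "xbar": (xbarbar - a3 * sbar, xbarbar + a3 * sbar),
        "s": (b3 * sbar, b4 * sbar),
    }

limits = xbar_s_limits([[0.0, 1.0, 2.0, 3.0], [1.0, 2.0, 3.0, 4.0]])
print(limits)
```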
Process Capability - Reduce Random Variation
That a process is free of systematic errors, and thus in statistical control, does not mean that it can deliver products that fulfill customer requirements. A process in statistical equilibrium does not necessarily fulfill the specifications, and an unstable process can still fulfill tolerance requirements. To link SPC with the specification limits, we introduce the concept of "capability". Process capability can be defined as a process' ability to fulfill customer requirements - more specifically, its ability to fulfill the specifications when no systematic errors are present, these having already been eliminated using SPC. The basic premise for estimating process capability is that the process is in statistical control: we use today's outcome to predict future process outcomes, so the current process has to be free of systematic errors.
There are a number of different capability indexes that reflect the potential of a process to fulfill specifications. We will review the two most common, Cp and Cpk, defined as:

Cp = (USL - LSL) / (6σ)

Cpk = min(USL - µ, µ - LSL) / (3σ)
In this case, µ is the average value of the process and σ is its standard deviation. Three conditions must be fulfilled to use these formulae:
- The process must be in statistical equilibrium;
- The process must have a normally distributed outcome;
- The process must have an independent outcome.
Considering these conditions, it is not enough to calculate an index and accept or reject a process. You should use histograms to reveal non-normal distribution, and time plots to detect systematic variations. If the average value of the process is exactly between specification limits and data are normally distributed, these indexes can be interpreted as a measurement of the percentage of defective units (Table 1).
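Under those three conditions, the indexes are straightforward to estimate from measurement data. Here is a minimal Python sketch; the placement offsets are hypothetical example data, not measurements from a real machine.

```python
# Minimal Cp / Cpk estimate from measurement data (stdlib only).
# Remember the caveats above: check stability and normality first,
# e.g. with a time plot and a histogram.
from statistics import mean, stdev

def cp_cpk(data, lsl, usl):
    mu, sigma = mean(data), stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical placement offsets (mm) against the ±0.15 mm limits
# derived earlier for the 0201 component:
offsets = [0.01, -0.02, 0.00, 0.03, -0.01, 0.02, -0.03, 0.01]
cp, cpk = cp_cpk(offsets, lsl=-0.15, usl=0.15)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")  # Cp = 2.46, Cpk = 2.44
```

Note that Cpk is slightly lower than Cp whenever the process mean is off-center, as it is here.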
Originally, a capability index of 1.00 or higher was considered sufficient for a capable process. To provide a margin to the requirement, the limit was later raised to 1.33 and then to 2.00. The latter level is popular within Six Sigma, which derives its name from the fact that at Cp = 2.00 the specification limits lie six standard deviations on either side of the process mean.
An index of 2.00 means that the process produces about one defect per five hundred million manufactured units. In truth, no real process has an outcome that is normally distributed that far out in its distribution tails. You must therefore take a reasonable attitude toward high capability requirements, or you may reject a process that is in fact capable. Alternatively, high requirements can be interpreted as safety margins to the specification limits.
There are many other useful indexes such as Pp, Ppk and ppm. No matter what index you choose, do not lose sight of the purpose of the investigation. The question to be answered is whether the production process is capable or not.
Setting Capability Requirements
Large random samples are necessary to analyze process capability. If a manufacturer requires that Cp ≥ 1.33, and the true process capability is exactly 1.33, the sample will be rejected in about half of the examinations. You must therefore have a higher true capability to have a margin to the requirement. How large this margin should be depends upon the size of the random sample, the measurement system error and the costs of making an incorrect decision. When you place requirements on a product consisting of several components, the product's complexity should influence the requirements for the individual components. If the capability requirement for individual component placement is Cp = 2.0, and the product consists of 500 components, the capability to produce the complete product will be about 1.64. Generally, you should be skeptical about excessively high capability requirements and the associated focus on process variation. For processes with a capability over 1.0, it is normally instability and systematic errors that cause losses of quality - not random variation. High capability can then be seen as a safety margin to the specification limits; keep in mind that this may drive up costs unnecessarily.
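The 500-component figure can be reproduced under the stated assumptions (independent, centered, normally distributed placements). This sketch gives about 1.63, which matches the figure quoted above up to rounding; the function name is ours, not from the standard.

```python
# Sketch: how per-component capability compounds over a whole board,
# assuming independent, centered, normally distributed placements.
from statistics import NormalDist

nd = NormalDist()

def product_capability(cp_component, n_components):
    # Two-sided defect probability for one centered component.
    p_defect = 2 * nd.cdf(-3 * cp_component)
    # Probability that at least one of n placements is defective.
    p_product = 1 - (1 - p_defect) ** n_components
    # Express the product-level defect rate as an equivalent index.
    return -nd.inv_cdf(p_product / 2) / 3

print(round(product_capability(2.0, 500), 2))  # 1.63
```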
Measuring Capabilities
In the IPC-9850 capability test method, components with known low variability are positioned on a well-defined glass plate (Figure 6) by the pick-and-place machine under test. The components' X, Y and θ coordinates are then measured in a CMM, and the accuracy is calculated as the smallest specification range needed for Cpk = 1.33 and Cpk = 2.00, respectively. Note that for the formulae described in the standard to apply, the data have to be normally and independently distributed. This may not be the case with multi-nozzle machines, so it must be verified with histograms before the data are accepted. The advantage of the IPC-9850 approach is that machines from different vendors can be compared with each other. On the other hand, it requires that measurements be carried out in an external CMM. These measuring machines are expensive and must be placed in climate-controlled rooms. Therefore, even though theoretically solid, the IPC-9850 method is not easily applied in a real production environment. Considering the advanced vision systems built into modern pick-and-place machines, couldn't the machine itself be used to measure accuracy? It can, provided the vision system is calibrated for optical distortion, and provided a few modifications are made to the IPC-9850 glass plate.
Figure 6. Glass plate for capability measurements.
Because we want to measure the accuracy of the machine's positioning system, we cannot allow that positioning system to influence the measurement. Instead, we use the machine's built-in vision system as an independent measuring device. This is made possible by adding local fiducials to the glass plate, four at each individual component site. The component position can then be measured relative to the local fiducials, instead of relative to the positioning system's encoder signals. Because the component pattern is identical to the one in IPC-9850, a program that applies the same formulae as prescribed by the standard can analyze the raw data. Repeatability is estimated for X, Y and θ, and accuracy specification limits are calculated for Cpk = 1.00, 1.33 and 2.00. This type of on-site capability measurement can be carried out in the same way as specified by the standard and will give comparable results, with the advantage of having results ready in a few minutes using equipment you already have. Furthermore, the method can be qualified by measuring the same glass plate in both a CMM and the pick-and-place machine.
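The local-fiducial idea can be illustrated with a small sketch. The coordinates and the simple centroid-based local origin are hypothetical; a real implementation would also correct for rotation and scale of the fiducial pattern, and for optical distortion.

```python
# Sketch: express a measured component center relative to the centroid
# of its four local fiducials, so the result does not depend on the
# machine's encoder coordinates. All coordinates are hypothetical
# vision-system readings in mm.

def relative_position(component_xy, fiducials_xy):
    fx = sum(x for x, _ in fiducials_xy) / len(fiducials_xy)
    fy = sum(y for _, y in fiducials_xy) / len(fiducials_xy)
    cx, cy = component_xy
    return cx - fx, cy - fy

# Four fiducials around one component site; nominal center is (1, 1).
fids = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
dx, dy = relative_position((1.02, 0.97), fids)
print(dx, dy)  # offset from the site center: ~(0.02, -0.03)
```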
Conclusion
With the introduction of new, advanced fine-pitch components and lead-free processes, accuracy requirements will increase. As pick-and-place machines are pushed to their capability limits, it becomes necessary to gain a deeper understanding of the statistical aspects of the assembly process. The traditional three-sigma value found in data sheets will not be enough to predict whether a piece of equipment can produce to specification - or just produce a lot of rework. With a modification of the IPC-9850 glass plate, the pick-and-place machine's built-in vision system can be used instead of an expensive CMM. Manufacturers who begin to gain knowledge about SPC and capability measurements, and develop in-house methods for characterizing their processes, will be one step ahead of the competition when lead-free, ultra-fine-pitch components become commonplace.
Mattias Jonsson, software and vision systems product manager, MYDATA Automation AB, may be contacted at +46 8 475 55 00; e-mail: mattias.jonsson@mydata.se.