Smart for Smart’s Sake, Part 1


In this so-called digital revolution in electronics manufacturing, we seem to be inching forward. This slow progress may be because the electronics manufacturing industry has special needs, being somewhat more complex than other areas of manufacturing. Previous generations of software systems applied to electronics manufacturing achieved limited benefits while introducing additional costs, so it is understandable that seasoned management are sceptical about this new breed of Smart Factory and Industry 4.0 solutions.

Let’s take the lid off this shop-floor digitization issue once and for all, and determine how what we do today, for example with the Open Manufacturing Language (OML), will differ from the past challenges that taught people to move forward cautiously.

Previously in this column, we discussed the various historical methods of collecting data from shop-floor processes, and how they compare to the fully normalized approach of OML, where data from any machine operation can be expressed in a single interoperable language. Let’s progress now to the next layer of activity—where the data collected is to be used. In this first part of the Smart for Smart’s Sake series, we consider the most basic of uses for the data: asset utilization and productivity.

Once a reliable flow of information from all the various processes on the shop-floor is established with OML, the natural inclination is to store all of that information in a huge database, so that anyone can use it for whatever purpose they like. Nowadays, thoughts turn immediately to the cloud, which we imagine as a vast repository of the kind Google uses to answer whatever we are searching for. Unfortunately, it is not quite like that.

Sending data to the cloud, through ERP, MES, or some sort of middleware, seems like an ideal IT solution. However, we are talking about a lot of data. At each process, for each PCB or assembly, a range of data is collected, most of which is available only in real time, such as:

  • Arrival of the unique product at the process
  • Start of the production cycle
  • Completion of the production cycle
  • Departure of the product from the process
  • Operational warnings such as material pick-up errors
  • Messages that describe various reasons that the machine might stop, such as material exhaust
  • Other process exceptions
  • Verification of each material
  • Unique feeder information
  • Traceability information, which can be a list of all reference designators and the code of the exact material ID that was used
  • Image information including material pickup and placement, leads, etc.
  • Machine usage statistics
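To make this concrete, a normalized event of the kind listed above might be modeled roughly as follows. This is a minimal sketch in Python; the class and field names are illustrative assumptions, not the actual OML schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class MachineEvent:
    """One normalized shop-floor event (hypothetical structure, not OML itself)."""
    machine_id: str          # e.g., "SMT-Z"
    unit_id: str             # unique PCB or assembly identifier
    event_type: str          # "UNIT_ARRIVED", "CYCLE_START", "CYCLE_END", ...
    timestamp: datetime
    payload: dict = field(default_factory=dict)  # materials, feeders, images, ...

# Example: a cycle-start event for one PCB at one machine
event = MachineEvent(
    machine_id="SMT-Z",
    unit_id="PCB-000451",
    event_type="CYCLE_START",
    timestamp=datetime.now(timezone.utc),
)
```

The value of normalization is that every process, whether SMT, test, or repair, emits the same envelope, with only the payload varying.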

Beyond SMT, the messages can get more complex, for example:

  • Operational result information (pass or fail)
  • Electronics repair ticket
  • Detailed test results or operational measurements
  • Operation guidance step increments and confirmations
  • Diagnosis and repair information
  • Routing confirmation

For a single operational production flow, end-to-end, many messages are generated each second, and some of them contain many kilobytes of data. Multiply that by the total number of production flows in the factory, and we find ourselves storing more and more data in the cloud, second by second, month after month. What makes electronics yet more of a challenge compared to other industries is the sheer size and complexity of the bill of materials, and the number and diversity of the production processes.
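To get a feel for the scale, a rough back-of-envelope calculation helps. The numbers below are purely illustrative assumptions, not measurements from any real factory:

```python
# Illustrative estimate of raw event volume accumulating in the cloud
msgs_per_sec_per_flow = 10        # messages generated per second per production flow
avg_msg_kb = 4                    # average message size in kilobytes
flows = 20                        # concurrent production flows in the factory
seconds_per_month = 30 * 24 * 3600

kb_per_month = msgs_per_sec_per_flow * avg_msg_kb * flows * seconds_per_month
tb_per_month = kb_per_month / 1024**3
print(f"{tb_per_month:.1f} TB of raw event data per month")
```

Even with these modest assumptions, the result is on the order of terabytes of raw data per month, before any images or detailed test records are included.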

The danger of taking all of the data from all of the processes and simply stuffing it into a cloud is that it will make that cloud “heavy.” Suddenly, impressions of the big fluffy white masses in the sky come a lot closer to the ground, and they look menacingly dark. Standard data analytical tools make heavy work of looking through complex data to generate reports, based on time, processes, materials, or any of the dozens of key metrics. Generating near-real-time graphs, charts, or dashboards of live production information is a serious challenge.

The good news is that the latest generation of business intelligence or data analytics software is able to cope with immense volumes of information. However, the issue is that we are putting raw data into the cloud. Even where this data is fully normalized into a single language like OML, the process of reporting is an order of magnitude more complex than simply going through the data and adding up the numbers in different ways.

For example, consider a fairly standard SMT machine, labeled "Z". After working for some time, Z completes the placement process, and the current PCB leaves the machine. It then looks to start the next, but no PCB has arrived. An event or status message is sent into the machine log and out to external systems, such as "Stopped. Waiting for PCB." Z has limited visibility outside of itself. What happens inside the machine can be reported, but any external causes of issues can only be represented by the symptoms.

For machine-based reports, around 80% of the information describes only symptoms, without a known reason or cause. Smart computerization, on the other hand, can take the "Waiting for PCB" message from machine "Z" and start the process to discover the reason behind the event. The Smart computerization knows the flow of the current production work-order or job, so the process immediately before "Z" can be identified, which may simply be a connected machine upstream in the line, or it could be a completely different machine process or logistics operation.

Using a common platform for the information, such as OML, what is going on is much easier to identify. For example, machine "X" earlier in the line had stopped, which starved subsequent machines "Y" and "Z" of PCBs. Working up the line, the computerization finds the source of the issue: perhaps machine "X" stopped because of a material pickup error. The downstream stop events that occurred as a result can now have their stop-time correctly attributed to that material pickup error, rather than being recorded as unexplained waiting.
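The upstream search described above can be sketched as a simple traversal. Everything here, the flow map, the event log, and the function name, is a hypothetical illustration under stated assumptions, not an actual OML or vendor API:

```python
# Stop reasons that are only downstream symptoms, not true causes
SYMPTOM_REASONS = {"Waiting for PCB"}

# Production flow: each machine mapped to the process immediately upstream
upstream = {"Z": "Y", "Y": "X", "X": None}

# Most recent stop reason logged by each machine
latest_stop = {
    "Z": "Waiting for PCB",
    "Y": "Waiting for PCB",
    "X": "Material pickup error",
}

def find_root_cause(machine):
    """Walk up the line until a stop reason that is not a mere symptom is found."""
    current = machine
    while current is not None:
        reason = latest_stop.get(current)
        if reason and reason not in SYMPTOM_REASONS:
            return current, reason
        current = upstream.get(current)
    return None, "unknown"

print(find_root_cause("Z"))  # -> ('X', 'Material pickup error')
```

Once the originating stop is found, the stop-time recorded against "Y" and "Z" can be reassigned to the real cause at "X", which is exactly the responsibility assignment the article describes.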

To read this entire article, which appeared in the September 2016 issue of SMT Magazine, click here.




Copyright © 2020 I-Connect007. All rights reserved.