Start with the FMEA, Not the Camera

A control plan for in-line inspection should be derived from the FMEA, not constructed from what a camera can physically see. The FMEA identifies failure modes, their severity ratings, and the current detection controls. In-line inspection should be added as a detection control for failure modes with high severity and currently inadequate detection - not for every possible defect type regardless of customer impact.

This sounds obvious but is frequently reversed in practice. The typical deployment sequence is: install camera, calibrate to detect surface defects, tune thresholds to minimize false positives, write up what the system actually does. The FMEA is updated after the fact to reflect the inspection capability that was installed. This produces an inspection system that detects what it can detect rather than one designed to detect what matters.

Starting from the FMEA means ranking defect types by Risk Priority Number (RPN = Severity x Occurrence x Detection). High-RPN failure modes where current detection is inadequate are the primary candidates for in-line inspection. If surface cracks on a fatigue-critical surface have RPN 400 and gas porosity in a non-structural section has RPN 120, the control plan should prioritize crack detection configuration over porosity detection configuration - even if porosity is easier to detect with the available hardware.
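The ranking step can be sketched in a few lines. The failure modes and ratings below are illustrative, not from a real FMEA; the crack and porosity entries reuse the RPN 400 and RPN 120 figures from the example above.

```python
# Sketch: rank FMEA failure modes by RPN to prioritize in-line inspection scope.
# Severity, occurrence, and detection ratings here are illustrative placeholders.

failure_modes = [
    {"mode": "surface crack, fatigue-critical face", "severity": 10, "occurrence": 5, "detection": 8},
    {"mode": "gas porosity, non-structural section", "severity": 4, "occurrence": 6, "detection": 5},
    {"mode": "flash at parting line, datum face A",  "severity": 6, "occurrence": 4, "detection": 4},
]

for fm in failure_modes:
    # RPN = Severity x Occurrence x Detection
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# High-RPN modes with weak current detection (high detection rating) come first.
for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
    print(f'{fm["rpn"]:4d}  {fm["mode"]}')
```

Note that the sort key is RPN, not ease of detection: the porosity entry would be trivial to configure but still ranks below the crack entry.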

Defining Inspection Features and Their Acceptance Criteria

For each failure mode included in the in-line inspection scope, the control plan needs to define the specific features to be inspected, the acceptance criterion for each, and the measurement method. This is analogous to the inspection instructions section of a standard incoming quality inspection plan, but calibrated for vision inspection capabilities.

Acceptance criteria need to be stated in terms measurable by the vision system. "No cracks" is not an adequate criterion for a vision system that measures crack width and length in pixels mapped to millimeters. The criterion should be: "No surface cracks exceeding 0.15mm width or 2.0mm length, measured at the inspection station under grazing illumination at 12-degree incidence angle." This is unambiguous, measurable, and calibratable.
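A criterion stated this way translates directly into an evaluation function. The calibration factor below (`MM_PER_PIXEL`) is an assumed value that would come from the station's calibration artifact; the width and length limits are the ones quoted above.

```python
# Sketch: evaluate the crack acceptance criterion from the text.
# MM_PER_PIXEL is an assumed pixel-to-millimeter calibration factor.

MM_PER_PIXEL = 0.02
MAX_CRACK_WIDTH_MM = 0.15
MAX_CRACK_LENGTH_MM = 2.0

def crack_exceeds_limit(width_px: float, length_px: float) -> bool:
    """Return True if a detected crack violates the acceptance criterion."""
    width_mm = width_px * MM_PER_PIXEL
    length_mm = length_px * MM_PER_PIXEL
    return width_mm > MAX_CRACK_WIDTH_MM or length_mm > MAX_CRACK_LENGTH_MM

print(crack_exceeds_limit(5, 80))   # 0.10 mm x 1.60 mm, within limits
print(crack_exceeds_limit(9, 80))   # 0.18 mm wide, reject
```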

Flash acceptance criteria should reference the customer drawing or customer-specific requirements (CSRs), not just what the die normally produces. If the customer drawing callout is "no flash exceeding 0.5mm at parting line on datum faces A and B," that is the criterion. If the current die normally produces 0.3mm flash and the internal alert threshold is set at 0.4mm, there is headroom before customer rejects occur. If flash climbs to 0.45mm, the die maintenance trigger fires well before the customer specification is approached, as discussed in our article on die wear detection.
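The two-tier structure (internal maintenance trigger below the customer limit) can be sketched as follows, using the assumed 0.4mm/0.5mm values from the example above:

```python
# Sketch: two-tier flash thresholds. The internal maintenance trigger sits
# below the customer specification limit, so die maintenance is scheduled
# before customer rejects occur. Values are the example figures from the text.

CUSTOMER_LIMIT_MM = 0.5       # customer drawing callout
MAINTENANCE_TRIGGER_MM = 0.4  # internal alert threshold, leaves headroom

def flash_disposition(flash_mm: float) -> str:
    if flash_mm > CUSTOMER_LIMIT_MM:
        return "reject"
    if flash_mm > MAINTENANCE_TRIGGER_MM:
        return "pass, schedule die maintenance"
    return "pass"

print(flash_disposition(0.30))  # normal die output
print(flash_disposition(0.45))  # trigger fires before the spec is reached
```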

Setting Rejection Thresholds: The False Positive Trade-Off

Setting inspection system rejection thresholds involves a trade-off between two error types: false rejections (good parts flagged as defective) and false passes (defective parts cleared as good). These two error rates move in opposite directions as the detection threshold changes. Tightening the threshold reduces false passes but increases false rejections. Loosening it reduces false rejections but increases false passes.
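The opposing movement of the two error rates can be made concrete with a small sketch. The scores below are hypothetical defect-likelihood outputs from a vision system, with ground-truth labels from offline confirmation; none of these numbers come from a real deployment.

```python
# Sketch: false rejections and false passes move in opposite directions as the
# reject threshold changes. Scores and labels are hypothetical.

samples = [  # (defect score, is_defective)
    (0.10, False), (0.20, False), (0.35, False), (0.55, False),
    (0.45, True),  (0.60, True),  (0.80, True),  (0.95, True),
]

def error_rates(threshold):
    """Return (false_rejection_rate, false_pass_rate) at a reject threshold."""
    good = [s for s, defective in samples if not defective]
    bad = [s for s, defective in samples if defective]
    false_rejection = sum(s >= threshold for s in good) / len(good)
    false_pass = sum(s < threshold for s in bad) / len(bad)
    return false_rejection, false_pass

for t in (0.3, 0.5, 0.7):
    fr, fp = error_rates(t)
    print(f"threshold {t:.1f}: false rejection {fr:.2f}, false pass {fp:.2f}")
```

Lowering the threshold (rejecting more aggressively) drives the false pass rate down at the cost of more false rejections, which is the operating-point choice the next paragraphs discuss.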

The appropriate operating point depends on the consequence of each error type for your specific application. For automotive safety parts with critical characteristics - seat belt anchors, steering knuckles, brake caliper brackets - the consequence of a false pass (defective part reaching the vehicle) is severe. The consequence of a false rejection (good part discarded) is a production cost. At these severity levels, accepting a higher false rejection rate to achieve a lower false pass rate is the correct engineering decision.

For non-structural components where customer-defined acceptance criteria have wider tolerances, false rejection cost becomes more significant relative to false pass consequence. An excessive false rejection rate in low-stakes applications erodes operator confidence in the system - if operators learn to override or bypass alerts because they see too many good parts flagged, the system fails regardless of its technical detection capability.

Document the threshold rationale in the control plan. State what false rejection rate is acceptable for each defect type at the chosen threshold, and how this was determined. This creates an audit trail that demonstrates the threshold was set by engineering judgment against defined criteria, not arbitrarily adjusted to minimize operator complaints.

Connecting Inspection Results to Your MES and SPC System

An in-line inspection system that stores data locally without feeding it into the quality management system (QMS) or manufacturing execution system (MES) creates an information island. Quality trends visible in inspection data will not appear in the production floor dashboards that operations and quality management use to make decisions. Corrective action records will not reference inspection data. PPAP submissions for new models will not benefit from ongoing production inspection data as evidence of process capability.

The integration requirements for inspection-to-MES connection need to be defined in the control plan. At minimum: inspection results (pass/fail and defect type for failures) should be written to the MES part record in real time. Defect rate by type should feed a control chart in the SPC system updated at minimum hourly. Exception reporting (defect rate exceeding control limits) should trigger a notification to the quality engineer responsible for the part family.

OPC-UA provides the communication path for inspection result output, as discussed in our article on OPC-UA integration in foundry environments. ForgePuls writes inspection results to the MES using OPC-UA or REST API depending on the MES vendor's preferred integration method. The data schema - part serial number, timestamp, inspection station, defect type, defect location, accept/reject decision - needs to be agreed with the MES vendor before deployment to avoid data structure mismatches during integration testing.
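As a starting point for that schema discussion, a per-part inspection record might look like the sketch below. All field names and values are assumptions to be agreed with the MES vendor, not a ForgePuls or MES-defined schema.

```python
# Sketch of a per-part inspection record covering the fields listed above.
# Field names and values are hypothetical and subject to MES vendor agreement.
import json
from datetime import datetime, timezone

record = {
    "part_serial": "FP-2024-0917-00412",
    "timestamp": datetime(2024, 9, 17, 8, 41, 12, tzinfo=timezone.utc).isoformat(),
    "station": "VIS-03",
    "defect_type": "surface_crack",            # null for passing parts
    "defect_location": {"x_mm": 41.2, "y_mm": 7.8},
    "decision": "reject",                      # accept | reject
}

# REST path: this JSON is the request body. OPC-UA path: each field maps to a node.
payload = json.dumps(record)
print(payload)
```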

The Response Plan: What Happens When the Control Chart Signals

A control plan without a defined response plan is incomplete. The control chart will generate signals. The control plan must specify what actions those signals trigger, who is responsible for executing those actions, and within what timeframe.

For Western Electric rules applied to a p-chart (defective fraction) for a specific defect type, a typical response plan includes:

A single point outside the upper control limit (UCL): The quality engineer reviews the last 10 flagged parts within 30 minutes. If a defect is confirmed, production stops for root cause investigation before resuming. If it is a false alarm (a systematic vision system issue), adjust the threshold, resume, and document the reason for the override.

Eight consecutive points above the centerline: Production engineering reviews process variable trends within 2 hours, identifies the variable that has drifted, and implements a corrective adjustment. Document in the corrective action log.

A trend of six consecutive increasing points: Proactive review within 4 hours. This pattern typically indicates early die wear or process drift. Schedule preventive maintenance or a process parameter review at the next planned stop.

These response time commitments need to be realistic for your staffing model. A response plan that requires a quality engineer on-site within 30 minutes works in a three-shift operation with dedicated quality staff. It does not work in a lean operation where the quality engineer covers multiple cells across the plant. The control plan should reflect actual available response capability, and staffing should be aligned to the response requirements the control plan establishes.

How to Validate the Control Plan Before Full Production

Before releasing an in-line inspection control plan for production use, validate it against a set of known-defective and known-good parts. The validation tests the detection capability (does the system catch the defects it is supposed to catch?) and the false rejection rate (does it flag good parts at the expected rate?). Both are required for the control plan to support a PPAP submission or an 8D corrective action response.

The validation parts set should include: confirmed good parts representing the range of normal surface variation (minimum 100), confirmed defective parts for each defect type in the inspection scope (minimum 20 per defect type), and borderline parts at or near the acceptance limit (minimum 10 per type). The validation results - detection rate and false rejection rate against this set - become part of the measurement system analysis (MSA) documentation for the inspection station.
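The two validation metrics reduce to simple ratios over the validation set. The counts below are illustrative, sized to the minimums stated above (20 defective, 100 good per type); real results come from running the actual parts through the station.

```python
# Sketch: compute detection rate and false rejection rate from validation
# results. Counts and outcomes below are illustrative, not measured data.

def validation_metrics(results):
    """results: list of (truly_defective, system_rejected) pairs."""
    defective = [rejected for truly, rejected in results if truly]
    good = [rejected for truly, rejected in results if not truly]
    detection_rate = sum(defective) / len(defective)
    false_reject_rate = sum(good) / len(good)
    return detection_rate, false_reject_rate

# 20 defective parts (19 caught), 100 good parts (3 falsely rejected).
results = ([(True, True)] * 19 + [(True, False)] * 1
           + [(False, False)] * 97 + [(False, True)] * 3)

dr, frr = validation_metrics(results)
print(f"detection rate {dr:.2f}, false rejection rate {frr:.3f}")
```

These two figures, documented against the validation set, are what feed the MSA record for the station described in the next paragraph.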

IATF 16949 requires that inspection stations used for product acceptance have documented MSA results. A visual inspection station - whether human or vision system - needs to demonstrate repeatable, reproducible results. The control plan for an automated vision inspection station should reference the MSA results that demonstrate its capability to make consistent accept/reject decisions.

ForgePuls includes control plan templates and MSA documentation support: Contact us
