Monday, June 17, 2013

Quality Function Deployment

     


QFD is a structured process that provides a means for identifying and carrying the customer’s voice through each stage of product development and implementation. QFD is achieved by cross-functional teams that collect, interpret, document, and prioritize customer requirements to identify bottlenecks and breakthrough opportunities.

QFD is a market-driven design and development process that results in products and services which meet or exceed customer needs and expectations. It is achieved by hearing the voice of the customer, stated directly in their own words, and by analyzing the competitive position of the company's products and services. Usually a QFD team is formed, consisting of marketing, design, and manufacturing engineers, to help design new products using customer inputs, current product capabilities, and a competitive analysis of the marketplace. QFD can be used either for new product design or for focusing the team's efforts on improving existing products and processes. QFD combines tools from many traditional disciplines, including engineering, management, and marketing.








Thursday, June 6, 2013

Measurement System Analysis



Measurement System Analysis
Measurement data are used to make decisions such as whether to adjust a manufacturing process and whether to accept or reject a component.
• Gauges are used on the shop floor to make such decisions.
• To ensure that the decision taken is correct, the gauge being used should be both accurate and consistent.
Bias 
Bias is the difference between the true value (reference value) and the observed average of measurements of the same characteristic on the same part. Bias is a measure of the systematic error of the measurement system; it reflects the combined effect of all sources of variation, known or unknown.
Probable causes of excessive bias:
Instrument needs calibration, worn instrument or fixture, improper calibration, poor-quality instrument, linearity error, wrong gauge for the application, environment (temperature, humidity, vibration, cleanliness).
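In practice, bias can be estimated by measuring a reference part repeatedly and comparing the observed average with the reference value. A minimal sketch of that calculation is shown below (Python); the reference value and readings are made up purely for illustration.

# Minimal bias-check sketch: compare the observed average of repeated
# readings on a reference part with its known reference value.
# All numbers are illustrative, not real study data.
reference_value = 10.00          # certified / true value of the reference part
measurements = [10.03, 10.01, 10.02, 9.99, 10.04, 10.02]

observed_average = sum(measurements) / len(measurements)
bias = observed_average - reference_value

print(f"Observed average: {observed_average:.3f}")
print(f"Bias:             {bias:+.3f}")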

Repeatability 

Repeatability is known as "within appraiser" variation. It is the variation in measurements obtained with one measurement instrument when used several times by one appraiser while measuring the identical characteristic on the same part. This is the inherent variation, or capability, of the equipment itself.
Repeatability is "within system" variation when the conditions of measurement are fixed and defined (fixed part, instrument, standard, method, operator, environment, etc.).
Probable causes for poor Repeatability:
Within part: Form, position, surface finish, taper
Within instrument: Repair, wear, equipment or fixture failure, poor quality maintenance
Within standard: Quality, wear, class
Within method: Variation in set-up, technique, zeroing, holding, clamping
Within appraiser: Technique, position, lack of experience, feel, fatigue
Within environment: Temperature, humidity, vibration, lighting

Reproducibility

Reproducibility is known as "between appraisers" variation. It is the variation in the averages of measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part.
Probable causes of poor Reproducibility:

Between appraisers
Average differences between appraisers A, B, and C caused by training, technique, skill, and experience; an instrument design or method that lacks robustness; or ineffective operator training.

Gauge R&R or GRR
GRR is an estimate of the combined variation of Repeatability and Reproducibility. The GRR variance is equal to the sum of the within-system and between-system variances.
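The sketch below (Python) illustrates that sum-of-variances idea with a simplified variance-components calculation: repeatability is estimated as the pooled within-cell variance (same appraiser, same part) and reproducibility as the variance of the appraiser averages. This is only a sketch, not the full AIAG average-and-range or ANOVA procedure, and the readings are made up for illustration.

# Simplified gauge-study sketch: data[appraiser][part] = repeated readings.
# Illustrates GRR variance = repeatability variance + reproducibility variance.
import math
from statistics import mean, pvariance

data = {
    "A": {1: [10.02, 10.03, 10.01], 2: [9.97, 9.98, 9.99]},
    "B": {1: [10.05, 10.06, 10.04], 2: [10.00, 10.01, 10.02]},
}

# Repeatability: pooled within-cell variance (same appraiser, same part).
cells = [trials for parts in data.values() for trials in parts.values()]
var_repeatability = mean(pvariance(trials) for trials in cells)

# Reproducibility (simplified): variance of the appraiser averages.
appraiser_means = [mean(x for trials in parts.values() for x in trials)
                   for parts in data.values()]
var_reproducibility = pvariance(appraiser_means)

var_grr = var_repeatability + var_reproducibility
print(f"Repeatability variance:   {var_repeatability:.6f}")
print(f"Reproducibility variance: {var_reproducibility:.6f}")
print(f"GRR variance:             {var_grr:.6f}")
print(f"GRR sigma:                {math.sqrt(var_grr):.6f}")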
The foundation of everything in quality is measurement. We measure for two primary reasons: one is to make a decision; the other is for process improvement. In general there are two types of data: attribute and variable.

• Attribute Data: Categorical, named only, arbitrary scales; also known as Discrete Data (the kappa statistic is relevant).

A typical attribute agreement study may require about 50 parts and two to three appraisers, each inspecting every part three times, with the outcome recorded as P or NG:
P - positive (pass) outcome by the operator; NG - negative (no good) outcome by the operator.
The percentages of good and bad parts are then compared between the appraisers.

K = (po - pe) / (1 - pe), where po is the observed proportion of agreement and pe is the proportion of agreement expected by chance.
Kappa is calculated with a cross-tab method between the operators' ratings.


Values < 0.40 indicate poor agreement between appraisers
Values > 0.75 indicate good to excellent agreement ( max = 1 )
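A minimal sketch of this kappa calculation for two appraisers rating the same parts P/NG might look like the following (Python); the ratings are made up for illustration.

# Cohen's kappa sketch for two appraisers rating the same 10 parts P / NG.
# Ratings are illustrative, not real inspection data.
appraiser_1 = ["P", "P", "NG", "P", "NG", "P", "P", "NG", "P", "P"]
appraiser_2 = ["P", "NG", "NG", "P", "NG", "P", "P", "P", "P", "P"]

n = len(appraiser_1)
categories = {"P", "NG"}

# Observed agreement po: fraction of parts where both appraisers agree.
p_o = sum(a == b for a, b in zip(appraiser_1, appraiser_2)) / n

# Expected agreement pe: chance agreement from each appraiser's marginal rates.
p_e = sum((appraiser_1.count(c) / n) * (appraiser_2.count(c) / n)
          for c in categories)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement po: {p_o:.3f}")
print(f"expected agreement pe: {p_e:.3f}")
print(f"kappa:                 {kappa:.3f}")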


• Continuous Data: Allows for infinitely finer sub-divisions; also known as Variables Data (Kendall's statistic is relevant).

MSA Factors Impacting Variation
• Gauge
• Appraiser
• Method
• Product 
• Environment


%GRR = 100 * stdev(gauge) / stdev(total) -- In terms of round numbers, the %GRR guidelines are generally the same as the PTR guidelines. BTW, a %GRR of 30% is the same as saying that the measurement system variance is 9% of the total variance (in other words, less than 10%).
Note that if the part-to-part variation increases, %GRR goes down. This does not mean you should ask your friends in the fab to increase part-to-part variability. Ratios are just that: ratios. If your part-to-part variability is extremely low, then your %GRR doesn't compare directly with someone else's %GRR where there is considerable part-to-part variability. If you're going to do a gauge r&r study, don't just pick two or three parts; you'll either underestimate part variability or overestimate it, neither of which is helpful.
Also note that if you use 6 as your sigma multiplier for PTR, then %GRR divided by PTR (approximately) equals Cp.
Again, use your data and experience to determine how the %GRR metric can help you decide whether your measurement system is capable.
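As a quick numerical illustration of why a %GRR of 30% corresponds to roughly 9% of the total variance, here is a small sketch (Python) with made-up sigmas:

# %GRR sketch with illustrative (made-up) standard deviations.
sigma_gauge = 0.05    # combined repeatability + reproducibility (sigma GRR)
sigma_total = 0.20    # total observed variation (gauge + part-to-part)

pct_grr = 100 * sigma_gauge / sigma_total                  # ratio of sigmas
pct_of_variance = 100 * (sigma_gauge / sigma_total) ** 2   # ratio of variances

print(f"%GRR: {pct_grr:.1f}%")
print(f"...which is only {pct_of_variance:.1f}% of the total variance")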
NDC = square-root[2*variance(process)/variance(gauge)] -- The number of distinct categories derives from another gauge metric, the discrimination ratio. Technically, the ndc can be interpreted as the number of non-overlapping confidence intervals that cover the range of the product variation. (Less technically, ndc can be interpreted as "never don't concentrate" if you're a Simpsons fan.)
More practically, you can view the ndc as the number of distinct categories that the measurement system “sees” within a given parameter. Relatively large amounts of measurement error mean that two parts that are truly quite different from each other may look very similar to each other when measured. Relatively small amounts of measurement error mean that the measurement system can differentiate between two parts that are similar but not identical to each other.
The usual ndc guidelines state that ndc should be 5 or more, and that values less than 2 suggest a non-capable measurement system. An ndc of 5 is actually equivalent to a %GRR of around 27.1%, so the ndc and %GRR guidelines are not consistent with each other. See Some Relationships Between Gage R&R Criteria by William H. Woodall and Connie M. Borror in Quality and Reliability Engineering International (2008; 24:99-106) for more information.
Use your data and experience to determine if the ndc metric can help you measure and improve your measurement system.
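Continuing with the made-up sigmas from the %GRR sketch above, the ndc formula works out like this (Python):

# ndc sketch using the same illustrative sigmas as the %GRR example.
import math

var_gauge = 0.05 ** 2              # gauge (GRR) variance
var_part = 0.20 ** 2 - var_gauge   # part-to-part variance = total minus gauge

ndc = math.sqrt(2 * var_part / var_gauge)   # about 1.41 * sigma_part / sigma_GRR
print(f"ndc: {ndc:.1f}")                    # usual guideline: 5 or more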
Remember that dataConductor’s gauge r&r results can be easily filtered and sorted, and in combination with other statistics you can quickly spot unusual results. It's easy to drop in a line plot or build a scatterplot to compare appraisers. Sorting the min/mean/max plot from low to high in the default gauge r&r output is a great way to spot whether variability changes as the absolute measurement changes.
Remember too that gauge metrics are there to help you improve your measurement system, but the focus should be on the substance of the metrics and not just the repetition of their use.