Wednesday, September 4, 2013

Results: Monte Carlo Simulation of Acceptance Sampling Plan


I have been meaning to do this for personal reference, so it feels good to finally have the results. I'm posting it here for everyone's benefit and for my future self: I'm sure I would misplace the file later, so this way I can at least search for it on the internet. Here it goes.
------------------------------------------------------

Monte Carlo Simulation of Acceptance Sampling Plan

Background:
People are sometimes confused when their acceptance sampling plan fails to capture a defective item and it gets shipped to the customer. Here is my take on the matter.

Objective:
To examine, using a Monte Carlo simulation approach, the performance of an attribute acceptance sampling plan given a batch size of 3,200 items and a sample size of 125 (MIL-STD-105E: General Inspection Level II, AQL = 0.1).

Methods:
1. Random numbers were generated using SAS's JMP random number utility. 1,000 columns of 3,200 rows each were generated, containing random pass/fail events that follow a Bernoulli distribution with parameter P. This table represents the population data, with each column representing a distinct period in time.

2. From the generated table above, a random sample of 125 rows was taken from each column. Each sampled row was then summarized as either containing a "fail" entry or not. Columns in which a "fail" was sampled were given a judgment of "batch failed"; columns in which no "fail" was sampled were given a judgment of "batch passed".

3. The interest of this simulation is to examine the response of percent batch failure as a function of the Bernoulli distribution parameter P.

4. The simulation was run for the following Bernoulli distribution parameter values: P = {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1}. For each value of P, three runs were made to get a feel for the standard deviation of the results.
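The procedure above can be sketched in plain Python as a minimal stand-in for the JMP setup (the function name, seed, and defaults are my own; one iteration of the loop plays the role of one population column):

```python
import random

def simulate_screen_effectiveness(p, n_batches=1000, batch_size=3200,
                                  sample_size=125, seed=0):
    """Estimate the percent of batches 'trapped' (judged 'batch failed')
    by an accept-on-zero attribute sampling plan, via Monte Carlo."""
    rng = random.Random(seed)
    trapped = 0
    for _ in range(n_batches):
        # One batch: 3,200 pass(False)/fail(True) items, Bernoulli(p)
        batch = [rng.random() < p for _ in range(batch_size)]
        # Sample 125 items without replacement; any sampled fail
        # means the batch is judged "batch failed"
        if any(rng.sample(batch, sample_size)):
            trapped += 1
    return 100.0 * trapped / n_batches

for p in (0.0001, 0.001, 0.01, 0.05):
    print(f"P = {p:<7} percent trapped = {simulate_screen_effectiveness(p):.1f}%")
```

Running this shows the qualitative pattern the post is after: at low true defect levels the plan traps only a small fraction of defective batches, while at high defect levels it traps nearly all of them.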

Results:
Here are the results of the simulation.
Screen Effectiveness Results
(Percent trapped by the sampling plan given true defective level P):
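As a rough analytical cross-check on the simulated numbers: a batch passes only when none of the 125 sampled items is defective, so, treating the draws as approximately independent (reasonable since 125 is small relative to 3,200), the expected percent trapped is about 1 - (1 - P)^125:

```python
sample_size = 125  # MIL-STD-105E General Inspection Level II, lot size 3,200

for p in (0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05, 0.1):
    # Batch passes only if all 125 sampled items are non-defective
    percent_trapped = 100.0 * (1.0 - (1.0 - p) ** sample_size)
    print(f"P = {p:<7} expected percent trapped = {percent_trapped:.1f}%")
```

For example, at P = 0.01 this gives roughly a 71% trap rate, meaning about 3 in 10 batches with a 1% defect level would still pass the plan.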

Tuesday, June 25, 2013

Sample Size Requirement is too big!

Someone asked me recently about sample size computations. In my experience this is one of the most difficult questions to answer, because much of the approach depends on the context of the statistical tool to be used.
This particular question, however, was one I heard for the first time. It can be rephrased like this:
"I have computed the required sample size using Minitab, but the number of samples I need to collect is much larger than the population under study. What should I do, and what sample size should I take?"

There are several statistical theories that address this, but being a practical person I lean towards the one that is simplest to explain, simplest to remember, and works most of the time. This rule of thumb is known as Cochran's (1977) correction factor. Basically the rule is:

If the sample size is greater than 3% of the population, then use the correction factor; otherwise use the usual sample size computations.

The correction adjusts the initial sample size to account for the limited population size. The final sample size that an analyst should use whenever the population is limited is:

Final_Sample_Size = Initial_Sample_Size/(1 + Initial_Sample_Size/Population_Size)
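A short sketch of the computation (the n0 = 385 and N = 200 figures are just hypothetical illustration values; 385 is the familiar initial sample size for a 95% confidence level and 5% margin of error):

```python
import math

def cochran_corrected_n(initial_n, population_size):
    """Cochran's (1977) finite-population correction:
    n = n0 / (1 + n0 / N), rounded up to a whole sample."""
    return math.ceil(initial_n / (1 + initial_n / population_size))

# Hypothetical example: initial sample size 385 against a population
# of only 200 units -- the corrected requirement fits the population.
print(cochran_corrected_n(385, 200))  # → 132
```

Note how the corrected sample size (132) comes out smaller than the population (200), which resolves the questioner's dilemma.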
 