Characteristics of Process Control
Statistical process control (SPC) is a method of controlling the behavior of a process through the use of statistical information, so that the process output meets the target with minimal variability. To maintain statistical control it is important to remove all special causes of variation from the process. For this example, ensuring statistical control and maintaining high-quality processing of the micro-chips for the keychain fob cards means the special-cause variables must be eliminated from the manufacturing process.
“Special causes account for 5-15% of quality problems and are due to a factor that has ‘slipped’ into the process, causing unstable or unpredictable variation. They are unpredictable variations that are abnormal to the process, including human error, equipment failure, defective or changed raw materials, acid spills, power failures, etc. Failure to remove them can result in corrosion, scale, metal fatigue, lower equipment efficiency, increased maintenance costs, unsafe working conditions, wasted chemicals, increased down-time (plant shut-down), etc. Removal of all special causes of variation yields a process that is in statistical control; special causes are correctable by local personnel.”
Diagnostic Charting and Graphing
There are a variety of diagnostic techniques available to investigate quality problems. Two of the more prominent are Pareto charts and fishbone diagrams. Pareto analysis is a technique for tallying the number and type of defects that occur within a product or service (Lind, 2002, pp. 626-627). It is often called the 80-20 rule: 80 percent of the activity is caused by 20 percent of the factors. By concentrating on that 20 percent of the factors, managers can attack 80 percent of the problem. Developing a Pareto chart involves several steps:
- Tally the type of defects.
- Rank the defects in terms of frequency of occurrence from largest to smallest.
- Produce a vertical bar chart, with the height of the bars corresponding to the frequency of each defect.
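The three steps above can be sketched in a few lines of Python. The defect names and counts below are purely illustrative (they are not data from the source); the cumulative percentages printed at the end are the values a Pareto chart's bars and cumulative line would display.

```python
from collections import Counter

# Hypothetical defect log for the smart-chip fob line (illustrative data only).
defects = ["solder bridge", "cracked casing", "solder bridge", "bad antenna",
           "solder bridge", "cracked casing", "solder bridge", "misprint"]

# Step 1: tally the type of defects.
tally = Counter(defects)

# Step 2: rank the defects from most to least frequent.
ranked = tally.most_common()

# Step 3: compute the frequencies and cumulative percentages the chart plots.
total = sum(tally.values())
cumulative = 0
for defect, count in ranked:
    cumulative += count
    print(f"{defect:15s} {count:3d}  {100 * cumulative / total:5.1f}%")
```

With these made-up counts, "solder bridge" alone accounts for half the defects, which is exactly the concentration of problems in a few factors that the 80-20 rule describes.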
The other prominent diagnostic chart is the cause-and-effect, or fishbone, diagram. It is called a cause-and-effect diagram to emphasize the relationship between an effect and the set of possible causes that produce it. The diagram is useful for organizing ideas and identifying relationships, and it invites open input from others. It would work well for defining issues within the manufacturing plant for the smart chips: determining any issue that arises, its causes, and the downstream impacts or effects on the business.
The process for constructing a fishbone diagram begins with drawing the skeleton. The effect, usually a particular problem or perhaps a goal, is shown on the right-hand side of the diagram as the head of the fish. The major causes are listed along the bones on the left-hand side. The usual approach is to consider four problem areas: methods, materials, equipment, and personnel.
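Structurally, a fishbone diagram is just a small tree: one effect, the four major cause categories, and specific causes on each branch. The sketch below represents it as a nested dictionary and prints a text outline; the effect and causes shown are hypothetical examples, not findings from an actual analysis.

```python
# Illustrative fishbone structure (effect and causes are made-up examples).
fishbone = {
    "effect": "Chip read failures in keychain fobs",
    "causes": {
        "Methods":   ["inconsistent solder profile", "skipped inspection step"],
        "Materials": ["defective raw wafers", "changed adhesive supplier"],
        "Equipment": ["worn pick-and-place nozzle", "drifting oven temperature"],
        "Personnel": ["incomplete training", "shift handover errors"],
    },
}

# Print a text outline: the effect at the "head", then each major bone
# (category) with its contributing causes beneath it.
print(fishbone["effect"])
for category, causes in fishbone["causes"].items():
    print(f"  {category}:")
    for cause in causes:
        print(f"    - {cause}")
```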
Sample of a Fishbone Diagram
Types of Variables
“It is useful to distinguish between two broad types of variables: qualitative and quantitative (or numeric). Each is broken down into two sub-types: qualitative data can be ordinal or nominal, and numeric data can be discrete (often, integer) or continuous. Since qualitative data always have a limited number of alternative values, such variables are also described as discrete. All qualitative data are discrete, while some numeric data are discrete and some are continuous. For statistical analysis, qualitative data can be converted into discrete numeric data by simply counting the different values that appear.”
For qualitative data the observations can be placed into separate, distinct categories. Some examples would be hair color such as blonde, auburn, or brunette. Such data are naturally discrete, in that there is a finite number of categories into which the observed data can be placed. “The data is further classified as either nominal if there is no natural order between the categories (e.g. eye color), or ordinal if an ordering exists (e.g. exam results, socio-economic status).”
Quantitative or numerical data occurs when the observations are counts or measurements. “To further define the data it can be discrete if the measurements are integers (e.g. number of people in a household, number of cigarettes smoked per day) or identified as continuous if the measurements can take on any value, usually within some range (e.g. weight).”
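The distinction above, and the conversion of qualitative data into discrete numeric data by counting, can be made concrete in a short Python sketch (the sample values are invented for illustration):

```python
from collections import Counter

# Qualitative (nominal) observations: hair color has no natural order.
hair_color = ["blonde", "auburn", "brunette", "blonde", "brunette", "brunette"]

# Converting qualitative data into discrete numeric data by simply
# counting how often each category value appears.
counts = Counter(hair_color)
print(counts)

# Quantitative discrete data: integer counts (people per household).
people_per_household = [2, 4, 3, 1, 5]

# Quantitative continuous data: measurements that can take any value
# within some range (weight in kilograms).
weights_kg = [61.2, 78.9, 55.4, 90.1]
```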
In-control/Out-of-control Processes and Range Possibilities
A process is said to be in control, as previously noted, when special causes have been eliminated and only common causes are present. As long as all the data fall within the control limits and nothing unusual is present, the process is in control, and the center line is a good measure of the process average.
Conversely, a process is considered out of control if any special causes are present, typically because something in the process has changed. Several phenomena signal an out-of-control condition:
- Sudden, short-lived change in the process
  - Signaled by points outside the control limits
- Sudden change in the process
  - Signaled by 2 out of 3 points in the warning zone
- Something new has been introduced in the process
  - Signaled by 7 points in a row on one side of the center line
- Gradual change in the process
  - Signaled by 6 points in a row going either up or down, or a clear trend either up or down
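Three of the run rules listed above can be sketched as simple checks over a sequence of plotted points (the warning-zone rule is omitted for brevity, and the data and control limits below are illustrative, not from a real chart):

```python
def out_of_control_signals(points, center, lcl, ucl):
    """Flag run-rule violations against the given center line and limits."""
    signals = []

    # Rule: a point outside the control limits (sudden, short-lived change).
    for i, x in enumerate(points):
        if x < lcl or x > ucl:
            signals.append((i, "outside control limits"))

    # Rule: 7 points in a row on one side of the center line.
    for i in range(len(points) - 6):
        window = points[i:i + 7]
        if all(x > center for x in window) or all(x < center for x in window):
            signals.append((i, "7 points on one side of center"))

    # Rule: 6 points in a row steadily rising or falling (gradual change).
    for i in range(len(points) - 5):
        window = points[i:i + 6]
        rising = all(a < b for a, b in zip(window, window[1:]))
        falling = all(a > b for a, b in zip(window, window[1:]))
        if rising or falling:
            signals.append((i, "trend of 6 points"))

    return signals

# Illustrative data with one spike beyond the upper control limit.
data = [10.1, 9.8, 10.0, 10.2, 13.5, 9.9, 10.1]
print(out_of_control_signals(data, center=10.0, lcl=8.5, ucl=11.5))
```

On this made-up series, only the single out-of-limits spike is flagged; a run of seven points above the center line or a six-point trend would trigger the other rules.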
Sample of Freak Pattern Control Chart
Confidence That the Process is in Control
Confidence intervals are usually created as part of a statistical test of hypotheses. The hypothesis test is designed to help us make an inference about the true population value at a desired level of confidence. Confidence limits may be determined so that the interval between them covers a population parameter with a stated confidence, that is, a stated proportion of the time. A related construct, the tolerance interval, is instead set to cover a stated proportion of the population itself; the end points of such intervals are called tolerance limits.
A confidence level expresses the long-run success rate of the estimation procedure. In other words, if the confidence level is 95%, then 95 times out of 100 the population percentage, if it could be measured, would be within the confidence interval, and 5 times out of 100 it would be outside the confidence interval.
Problem Statement and Hypothesis Testing
Hypothesis testing can serve as a foundation in research projects to determine which path to follow. For example, research might aim to determine whether to convert to a small smart-chip keychain fob credit card or continue with the traditional type of credit card. Researchers look for results such as a difference between means, tested with ANOVA or a t-test; a difference between proportions, tested with a chi-square comparison; or a relationship, examined with correlation or possibly multiple regression.
One way to approach this would be to go to the other technology groups with the concept of putting a credit card smart chip within a keychain-sized fob; this is analogous to looking for a statistical difference. There are at least four outcomes that could come from the hypothesis itself: two correct and two wrong. The results may show that this process, being so minutely small, will not work simply, or that it does work and is not that complex. Independently of this, the technology groups may say it will work easily at a high cost, or that it will work easily at minimal cost.
Statistical inference generally involves four main steps.
- Formulating a hypothesis about the population or "state of nature,"
- Collecting a sample of observations from the population,
- Calculating statistics based on the sample,
- Either accepting or rejecting the hypothesis based on a pre-determined acceptance criterion.
There are two types of error associated with statistical inference. A Type I error (α error) is the probability that a hypothesis that is actually true will be rejected; the value of α (alpha) is known as the significance level of the test. A Type II error (β error) is the probability that a hypothesis that is actually false will be accepted.
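The four steps of inference, and the role of α as the accepted Type I error rate, can be sketched as a simple two-sided one-sample z-test. The sample data and the target mean below are invented for illustration, and the normal approximation stands in for an exact t-test:

```python
import math
import statistics

def one_sample_z_test(sample, mu0, alpha=0.05):
    """Two-sided z-test of H0: population mean == mu0 (normal approximation).

    alpha is the significance level: the Type I error probability we accept.
    """
    n = len(sample)
    mean = statistics.mean(sample)
    s = statistics.stdev(sample)
    z = (mean - mu0) / (s / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p, p < alpha      # True in the last slot means "reject H0"

# Step 1: hypothesize the mean fob read time equals a target of 12.0.
# Step 2: collect a sample (illustrative values).
sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0]
# Steps 3-4: calculate the statistic and accept or reject against alpha.
z, p, reject = one_sample_z_test(sample, mu0=12.0)
print(f"z = {z:.2f}, p = {p:.3f}, reject H0: {reject}")
```

Here the p-value exceeds α = 0.05, so the hypothesis is not rejected; choosing a larger α would raise the Type I error risk while lowering the Type II error risk.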
Statistical Concepts
To apply some of the concepts acquired in the Statistics for Managerial Decisions course, a survey was given to those in my department regarding usage of the new ExpressPay keychain fob. From the number of times each respondent used the fob, the following data were calculated using the MegaStat add-in within Excel. The results are as follows:
References:
4GM Consulting. An introduction to understanding variation. Retrieved May 8, 2004 from:
Bierwirth, J. and Gabele, T. (2001, November 30). Common SPC terms. Retrieved May 4, 2004 from:
Davis, D. (2004, February 22). The rush to contactless payments. Retrieved May 12, 2004 from:
Lind and Mason. (2003). Business and research statistics (11th ed.). New York: McGraw-Hill.
Pyzdek, T. (2004). Statistical inference. Retrieved May 15, 2004 from: inc.com/knowledgecente/articles/CQE_IIICb.html
Stanton, C. Logic of hypothesis testing. Retrieved May 11, 2004 from:
Unknown Author. Types of variables. Retrieved May 14, 2004 from: