
Identify three tasks (e.g., Use of Tool/Method) that are typically done during the Measure Phase of DMAIC and briefly summarize the purpose for performing each of these tasks: process map (value stream map or other), process stability (SPC charts), process capability (yield, DPM, DPMO), measurement system capability (Gage R&R), normality test/distribution ID, or other data patterns. Any sort of "Cause and Effect" diagram used to determine root causes belongs in Analyze - not Measure.

Solution

Value Stream Mapping is a method of visually mapping a product's production path (materials and information) from "door to door". VSM can serve as a starting point to help management, engineers, production associates, schedulers, suppliers, and customers recognize waste and identify its causes. The process includes physically mapping your "current state" while also focusing on where you want to be, or your "future state" blueprint, which can serve as the foundation for other Lean improvement strategies.

A value stream is all the actions (both value added and non-value added) currently required to bring a product through the main flows essential to every product:

·         the production flow from raw material into the arms of the customer

·         the design flow from concept to launch

Taking a value stream perspective means working on the big picture, not just individual processes, and improving the whole, not just optimizing the parts.

Within the production flow, the movement of material through the factory is the flow that usually comes to mind. But there is another flow - of information - that tells each process what to make or do next. You must map both of these flows.

Value Stream Mapping can be a communication tool, a business planning tool, and a tool to manage your change process. The first step is drawing the current state, which is done by gathering information on the shop floor. This provides the information needed to map a future state. The final step is to prepare and begin actively using an implementation plan that describes, on one page, how you plan to achieve the future state.

Based on experience with many types of process data, and supported by the laws of statistics and probability, Dr. Shewhart devised control charts used to plot data over time and identify both Common Cause variation and Special Cause variation.

Process Variability

If you have reviewed the discussion of frequency distributions in the Histogram module, you will recall that many histograms will approximate a Normal Distribution, as shown below (please note that control charts do not require normally distributed data in order to work - they will work with any process distribution - we use a normal distribution in this example for ease of representation):

In order to work with any distribution, it is important to have a measure of the data dispersion, or spread. This can be expressed by the range (highest less lowest), but is better captured by the standard deviation (sigma). The standard deviation can be easily calculated from a group of numbers using many calculators, or a spreadsheet or statistics program.
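As a minimal illustration (Python, with a made-up set of measurements), the range and the standard deviation of a group of numbers can be computed as follows:

import statistics

data = [3, 4, 5, 4, 4, 5, 4, 5, 6, 5]      # hypothetical measurements

data_range = max(data) - min(data)          # range: highest less lowest
sigma = statistics.stdev(data)              # sample standard deviation

print("Range:", data_range)
print("Standard deviation:", round(sigma, 3))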

Initiate Data Collection and SPC Charting

Develop a sampling plan to collect data (subgroups) in a random fashion at a determined frequency. Be sure to train the data collectors in proper measurement and charting techniques. Establish subgroups following a rational subgrouping strategy so that process variation is captured BETWEEN subgroups rather than WITHIN subgroups. If process variation (e.g., from two different shifts) is captured within one subgroup, the resulting control limits will be wider, and the chart will be insensitive to process shifts.

The type of chart used will be dependent upon the type of data collected as well as the subgroup size, as shown by the table below. A bar, or line, above a letter denotes the average value for that subgroup. Likewise, a double bar denotes an average of averages.

Consider the example of two subgroups, each with 5 observations. The first subgroup's values are 3, 4, 5, 4, 4 - yielding a subgroup average of 4 (x-bar 1). The second subgroup's values are 5, 4, 5, 6, 5 - yielding an average of 5 (x-bar 2). The average of the two subgroup averages is (4 + 5)/2 = 4.5, which is called X double-bar, because it is the average of the averages.
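The same arithmetic, plus X-bar chart control limits computed with the usual R-bar method, is sketched below in Python. The data values come from the worked example above; the constant A2 = 0.577 is the standard control chart constant for a subgroup size of 5.

import statistics

subgroup1 = [3, 4, 5, 4, 4]
subgroup2 = [5, 4, 5, 6, 5]

xbar1 = statistics.mean(subgroup1)               # 4.0
xbar2 = statistics.mean(subgroup2)               # 5.0
x_double_bar = statistics.mean([xbar1, xbar2])   # 4.5, the average of the averages

# Average range of the subgroups (R-bar)
r_bar = statistics.mean([max(subgroup1) - min(subgroup1),
                         max(subgroup2) - min(subgroup2)])

A2 = 0.577                                       # control chart constant for n = 5
ucl = x_double_bar + A2 * r_bar                  # upper control limit for the X-bar chart
lcl = x_double_bar - A2 * r_bar                  # lower control limit for the X-bar chart

print(x_double_bar, lcl, ucl)

In practice the control limits would be based on 20-25 subgroups rather than two; the two subgroups here simply mirror the worked example.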

You can see examples of charts in Section 9 on Control Limits.

When an out-of-control condition occurs, the points should be circled on the chart, and the reaction plan should be followed.

·        PROCESS CAPABILITY

·         ±1 S.D.: 68% of the population
·         ±2 S.D.: 95% of the population
·         ±3 S.D.: 99.7% of the population

At ±3 sigma from the process mean, expect 99.73% of observations to fall within these limits; at ±2 sigma, expect 95.45%.

Process capability is the ability of a process to meet product design/technical specifications. The standard analysis is conducted only when the process is normally distributed.

HOW TO MEASURE CP
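A minimal sketch of the standard Cp and Cpk calculations in Python, assuming a normally distributed process; the measurements and the 4.20-4.22mm specification limits are illustrative (the limits echo the shaft-diameter example used for Gage R&R below). Cp = (USL - LSL) / 6σ measures potential capability, while Cpk = min(USL - μ, μ - LSL) / 3σ also accounts for how well the process is centered.

import statistics

# Hypothetical shaft-diameter measurements and specification limits (mm)
measurements = [4.210, 4.212, 4.209, 4.211, 4.213, 4.210, 4.211, 4.212]
LSL, USL = 4.20, 4.22

mu = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

cp = (USL - LSL) / (6 * sigma)                 # potential capability (spread only)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)    # actual capability (spread and centering)

print("Cp  =", round(cp, 2))
print("Cpk =", round(cpk, 2))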

·        MEASUREMENT SYSTEM CAPABILITY (GAGE RR)

Gage R&R (Gage Repeatability and Reproducibility) is the amount of measurement variation introduced by a measurement system, which consists of the measuring instrument itself and the individuals using the instrument. A Gage R&R study is a critical step in manufacturing Six Sigma projects, and it quantifies three things: repeatability (variation from the instrument), reproducibility (variation between the people using it), and the combined measurement system variation.

The overall Gage R&R is normally expressed as a percentage of the tolerance for the CTQ being studied, and a value of 20% Gage R&R or less is considered acceptable in most cases. Example: for a 4.20mm to 4.22mm specification (0.02 total tolerance) on a shaft diameter, an acceptable Gage R&R value would be 20 percent of 0.02mm (0.004mm) or less.
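The acceptance check described above amounts to a simple ratio, sketched here in Python; the total Gage R&R figure of 0.003mm is hypothetical and would normally come from an ANOVA- or average-and-range-based study.

LSL, USL = 4.20, 4.22                 # shaft diameter specification (mm)
tolerance = USL - LSL                 # 0.02 mm total tolerance

gage_rr = 0.003                       # hypothetical total Gage R&R variation (mm)

percent_of_tolerance = 100 * gage_rr / tolerance
acceptable = percent_of_tolerance <= 20   # common 20% rule of thumb cited above

print(round(percent_of_tolerance), "% of tolerance; acceptable:", acceptable)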

·        NORMALITY TEST/DISTRIBUTION ID

Three commonly used normality tests are described below (a short code sketch for running them follows this list):

·         D'Agostino-Pearson omnibus test. We recommend using this test. It first computes the skewness and kurtosis to quantify how far from Gaussian the distribution is in terms of asymmetry and shape. It then calculates how far each of these values differs from the value expected with a Gaussian distribution, and computes a single P value from the sum of these discrepancies. It is a versatile and powerful normality test. Note that D'Agostino developed several normality tests; the one used by Prism is the "omnibus K2" test.

·         Shapiro-Wilk test. This test works very well if every value is unique, but does not work well when there are ties. The basis of the test is hard for nonmathematicians to understand. For these reasons, we prefer the D'Agostino-Pearson test, even though the Shapiro-Wilk test works well in most cases.

·         Kolmogorov-Smirnov test, with the Dallal-Wilkinson-Lilliefor corrected P value. It compares the cumulative distribution of the data with the expected cumulative Gaussian distribution, and bases its P value simply on the largest discrepancy. This is a very crude way of quantifying deviations from the Gaussian ideal, and it does not do a good job of discriminating whether or not your data were sampled from a Gaussian distribution. RB D'Agostino says "The Kolmogorov-Smirnov test is only a historical curiosity. It should never be used." We agree. Note that versions of Prism prior to 4.01 and 4.0b inappropriately reported the Kolmogorov-Smirnov P value directly (without the Dallal-Wilkinson-Lilliefor correction).
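A minimal sketch of running these tests in Python with SciPy, on a made-up sample; scipy.stats.normaltest implements the D'Agostino-Pearson omnibus K2 test and scipy.stats.shapiro the Shapiro-Wilk test. The plain Kolmogorov-Smirnov call shown here does NOT include the Dallal-Wilkinson-Lilliefor correction discussed above.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=2.0, size=100)   # hypothetical sample

k2_stat, k2_p = stats.normaltest(data)             # D'Agostino-Pearson omnibus K2
sw_stat, sw_p = stats.shapiro(data)                # Shapiro-Wilk
ks_stat, ks_p = stats.kstest(data, 'norm',
                             args=(data.mean(), data.std(ddof=1)))   # plain K-S

print("D'Agostino-Pearson p =", round(k2_p, 3))
print("Shapiro-Wilk p =", round(sw_p, 3))
print("Kolmogorov-Smirnov p =", round(ks_p, 3))

# With the alpha risk of 0.05 assumed throughout, p > 0.05 means the data can be
# treated as normally distributed; p < 0.05 indicates a non-normal distribution.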

·         68.3% of the population is contained within 1 standard deviation from the mean.

·         95.4% of the population is contained within 2 standard deviations from the mean.

·         99.7% of the population is contained within 3 standard deviations from the mean.

These three figures should be committed to memory if you are a Six Sigma GB/BB.

These three figures are often referred to as the Empirical Rule or the 68-95-99.7 Rule, as approximate representations of the proportion of population data within 1, 2, and 3 standard deviations from the mean of a normal distribution.

Over time, after making numerous calculations of the cumulative distribution function and z-scores with these three approximations in mind, you will be able to quickly estimate the proportion of a population and the percentage of area that falls under a given part of the curve.
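These coverage figures can be verified directly from the standard normal cumulative distribution function; a minimal Python sketch using SciPy:

from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)   # area within +/- k standard deviations
    print("Within +/-", k, "sigma:", round(100 * coverage, 2), "%")

# Prints approximately 68.27%, 95.45%, and 99.73%.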

Most Six Sigma projects will involve analyzing normal sets of data or assuming normality. Many naturally occurring events and processes with "common cause" variation exhibit a normal distribution (when a process does not, this is another way to help identify "special cause" variation).

This distribution is frequently used to estimate the proportion of the process that will perform within specification limits or a specification limit (NOT control limits - recall that specification limits and control limits are different).

However, when the data does not meet the assumptions of normality the data will require a transformation to provide an accurate capability analysis. We will discuss that later.

The mean defines the central location of a normal data set, and the median, mode, and mean are nearly equal. The area under the curve accounts for all of the observations or measurements.

Throughout this site the following assumptions apply unless otherwise specified:

A P-value less than the alpha risk of 0.05 indicates a non-normal distribution (although in some cases normality assumptions may still be applied). The level of confidence assumed throughout is 95%.

A P-value greater than the alpha risk of 0.05 indicates that the data can be treated as normally distributed.

The z-statistic can be derived from any variable point of interest (X) with the mean and standard deviation. The z-statistic can be referenced to a table that will estimate a proportion of the population that applies to the point of interest.

Recall that one of the two important implications of the Central Limit Theorem is that, regardless of the distribution type (unimodal, bi-modal, skewed, symmetric), the distribution of the sample means will take the shape of a normal distribution as the sample size increases. The greater the sample size, the more normality can be assumed.
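This behavior is easy to demonstrate by simulation; the sketch below (Python, with an arbitrarily chosen skewed exponential population and illustrative sample sizes) shows the distribution of sample means becoming more symmetric and more nearly normal as the sample size grows.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

for n in (2, 10, 50):
    # 2000 sample means, each computed from n observations of a skewed population
    sample_means = rng.exponential(scale=1.0, size=(2000, n)).mean(axis=1)
    print("n =", n,
          "skewness =", round(stats.skew(sample_means), 2),
          "excess kurtosis =", round(stats.kurtosis(sample_means), 2))

# Both statistics move toward 0 (the normal-distribution values) as n increases.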

Some tables and software programs compute the z-statistic differently but will all get the correct results if interpreted correctly.

Some tables report single-tail probability while others report double-tail probability. Examine each table carefully to draw the correct conclusion.

The bell curve theoretically spreads from negative infinity to positive infinity and approaches the x-axis without ever touching it, in other words it is asymptotic to the x-axis.

The area under the curve represents probability, and the total area equals 1.0, or 100%.

The normal distribution is described by the mean and the standard deviation. The formula for the normal distribution density function is shown below (e ≈ 2.71828):

f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))

Because computing the area under the normal curve from the formula above requires time-consuming integral calculus, most of the time it is easier to reference tables.

With pre-populated values based on a given value for "x", the probabilities can be assessed using a conversion formula (shown below) to the z-distribution, also known as the standardized normal curve:

z = (x − μ) / σ

The z-distribution is a normal distribution with a mean of 0 and a standard deviation of 1.

A z-score is the number of standard deviations that a given value \"x\" is above or below the mean of the normal distribution.
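A minimal sketch of this conversion in Python, using the standard normal CDF from SciPy in place of a printed z-table; the mean, standard deviation, and point of interest are hypothetical values.

from scipy.stats import norm

mu, sigma = 4.211, 0.0013        # hypothetical process mean and standard deviation (mm)
x = 4.214                        # point of interest, e.g. an upper specification limit

z = (x - mu) / sigma             # number of standard deviations x lies from the mean
proportion_below = norm.cdf(z)   # single-tail probability, as read from a z-table

print("z =", round(z, 2), "proportion below x =", round(proportion_below, 4))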
