Natwar Lal last won the day on April 13
Natwar Lal had the most liked content!
Profile Information
- Name
Natwar Lal
4-Eyes Principle
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
4-Eyes Principle - a risk control method where a set of 4 eyes (i.e. 2 people) must approve or check something before it can be done. The fact that no human being is perfect led to the use and popularity of this method. The concept is simple: the odds of two different people making the same mistake at the same time are very, very small - but NOT zero. This is why errors have still occurred in some instances even when 2 or more people checked the same thing.

If implemented in a process, will it be value adding or non-value adding? Ideally, it is a non-value-adding activity. However, there are instances where customers are willing to pay for multiple people checking the same thing; in such scenarios, 4- or 6- or even 8-eye checks become value adding. Barring these, the 4-Eyes principle is a non-value-adding activity usually made mandatory by a regulator for safety reasons, and hence it is classified as a value-enabling activity.

Examples where the 4-Eyes principle is value adding
1. Authors usually want multiple reviews (copy edits, proofreads etc.) of their work before publishing and are willing to pay for such reviews.
2. In managed services (outsourced work), clients sometimes demand dual data entry and pay for it (imagine the cost arbitrage - the cost of 2 outsourced FTEs is less than that of 1 onshore FTE).
3. Patients willingly take second opinions before major medical procedures.
In all of the above, the customer is clearly willing to pay for the multiple reviews or checks.

Examples where the 4-Eyes principle is non-value adding (but value enabling)
1. Banking transactions need to be approved by 2 or more people depending on the ticket value (usually called a maker-checker process).
2. Presence of 2 pilots in the cockpit. Both must check and confirm the same thing before an action is taken.
3. Closing of doors on a plane. 2 crew members must check and confirm it.
4. Presence of a team of doctors and nurses during surgeries. The doctor asks for an instrument by calling its name; the junior doctor or nurse hands over the instrument while calling its name again - a double confirmation that the correct instrument is being used.
5. Presence of two people for opening bank safes and lockers.
All these examples carry a cost of failure, and hence the 4-Eyes principle is implemented to minimize the risk of failure. In such cases it becomes a value-enabling activity.

Example where the 4-Eyes principle is a complete waste
1. Putting additional layers of audits in the service industry because of customer complaints and escalations.
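The "very small but not zero" point can be made concrete with a quick probability sketch; the 5% miss rate and the independence assumption below are illustrative, not from the original answer.

```python
# Hypothetical illustration: if each checker independently misses an error
# with probability 5%, the chance that BOTH miss it is far smaller - but not zero.
p_miss_single = 0.05                 # assumed miss rate of one reviewer
p_miss_both = p_miss_single ** 2     # both miss, assuming independent checks

print(f"One checker misses:  {p_miss_single:.2%}")   # 5.00%
print(f"Both checkers miss:  {p_miss_both:.2%}")     # 0.25% - small, but not zero
```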
- 7 replies
Tagged with:
- 4 eyes principle
- value adding
- non value adding
- duality principle
Pseudo-continuous data
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
Pseudo-continuous data, as the name suggests, is pseudo continuous, i.e. it is actually not continuous (it is discrete) but is treated as continuous.

Advantages
1. More powerful analytical tools can be used on the data
2. Continuous data tends to follow a normal distribution, and if it does, we can apply its properties
3. As a Lean Six Sigma practitioner, you need to remember fewer tools :)

Disadvantages
1. Converting the statistical solution into a practical output can go wrong, as the properties of discrete and continuous data are different
2. Misinterpretation and misuse of tools and techniques

Guideline for treating discrete data as continuous
1. There are many (read: practically uncountable) possible values of the discrete data. This is one reason why a percentage is usually considered pseudo-continuous.

Treating discrete data as pseudo-continuous is a powerful method that can be used by LSS experts in data analysis. However, the project leader must exercise discretion in applying it.
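As a rough illustration of the "many possible values" guideline, here is a minimal sketch; the threshold of 10 distinct values is an assumed rule of thumb, not a fixed standard.

```python
def looks_pseudo_continuous(values, min_distinct=10):
    """Rule-of-thumb check: discrete data with many distinct values
    (threshold assumed here as 10) may be treated as pseudo-continuous."""
    return len(set(values)) >= min_distinct

# Quality percentages from 11 transactions - discrete ratios, but many distinct values
quality_scores = [91.2, 88.5, 93.0, 90.1, 87.4, 95.6, 89.9, 92.3, 94.1, 86.8, 90.7]
print(looks_pseudo_continuous(quality_scores))  # True -> continuous tools may apply
```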
- 2 replies
- data types
- pseudo continuous
Control-Impact Matrix
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Control-Impact Matrix is a 2D tool that helps compare items against two parameters:
1. Control that we have over the item
2. Impact (expected) that the item could have on solving the problem

In a DMAIC project, this tool is primarily used in the Analyze phase for prioritizing the causes to focus on. Typically the priority order is as follows:
1. Causes with High Control, High Impact
2. Causes with High Control, Low Impact
3. Causes with Low Control, High Impact
4. Causes with Low Control, Low Impact

There is debate on the order of points 2 and 3; however, I feel point 2 should have higher priority than point 3. The matrix may also be used in the Improve phase to prioritize solutions, but there is another matrix tool more suitable for that purpose - the Effort-Impact Matrix. Therefore, I feel the Control-Impact Matrix is better suited to the Analyze phase.
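A minimal sketch of the prioritization order described above; the cause names and the dictionary-based ranking are purely illustrative.

```python
# Priority map following the order argued above:
# (control, impact) -> rank, where a lower rank means "work on it first"
PRIORITY = {
    ("high", "high"): 1,
    ("high", "low"):  2,
    ("low",  "high"): 3,
    ("low",  "low"):  4,
}

causes = [
    ("Untrained new hires",   "high", "high"),
    ("Client template churn", "low",  "high"),
    ("Old scanner hardware",  "high", "low"),
    ("Market seasonality",    "low",  "low"),
]

# Sort causes by quadrant rank and print the resulting work order
for name, control, impact in sorted(causes, key=lambda c: PRIORITY[(c[1], c[2])]):
    print(f"{PRIORITY[(control, impact)]}. {name} (control={control}, impact={impact})")
```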
- 3 replies
- control-impact matrix
Process Door vs Data Door
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
In order to answer the question "Why is the problem occurring?", the following steps are performed in the Analyze phase:
1. List all potential causes
2. Analyze the potential causes
3. Identify the critical causes

There are 2 approaches (doors) that can be deployed for analyzing the potential causes - the Process Door and the Data Door. I have summarized both in the table below.

[Table comparing the Process Door and Data Door approaches - image in the original post]

P.S. The usage of tools is not exclusive, i.e. a tool can be used for either the process or the data door depending on the situation. The table only highlights the preferred or most commonly used tools.
- 5 replies
Tagged with:
- analyze phase
- process door
- data door
Golden Ratio
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
Let us consider two numbers a and b where a is greater than b. If the ratio of the sum of these numbers (i.e. a+b) to the larger number (i.e. a) is the same as the ratio of the two numbers (i.e. a is to b), then the two numbers are said to be in the Golden Ratio.

Golden Ratio => (a+b) / a = a / b

This is denoted by the Greek letter phi (φ). The ratio comes to an irrational number ≈ 1.618.

Applicability of the Golden Ratio is found in:
1. Nature - sunflowers, position of leaves
2. Architecture
3. Art
4. Music
5. Technical analysis of stocks
6. Design Thinking - laptop screens, mobile screen sizes
7. Book layouts and publishing
8. Logo designs - Twitter and Apple
9. Computer algorithms
10. Mathematics - Fibonacci series
11. Geometry - spiral shapes
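For completeness, a short computational check of the definition: solving (a+b)/a = a/b with phi = a/b gives phi^2 = phi + 1, whose positive root is (1+√5)/2.

```python
from math import sqrt

# From (a+b)/a = a/b: letting phi = a/b gives phi**2 = phi + 1,
# whose positive root is the Golden Ratio
phi = (1 + sqrt(5)) / 2
print(phi)  # 1.618033988749895

# Ratios of consecutive Fibonacci numbers converge to phi (item 10 above)
a, b = 1, 1
for _ in range(15):
    a, b = b, a + b
print(b / a)  # ~1.618034
```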
- 4 replies
Interrelationship Diagram
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Interrelationship Diagram is a tool that helps identify drivers and effects (same idea as cause and effect). Prior to using this tool, one needs to identify the potential causes (using a tool such as a Cause and Effect Diagram or an Affinity Diagram). In a C&E diagram, the relationships are well established; in an Affinity Diagram, however, we only get clustered categories of similar ideas, and the relationships between these clusters may not be evident. Hence the need for an Interrelationship Diagram.

Instances where it could be used:
1. Establishing cause-and-effect relationships between a large number of categories (could be a mix of multiple causes and multiple effects)
2. Establishing the root cause, i.e. the cause that drives many effects. Sequencing the relationships can give us insights on this
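A minimal sketch of the arrow-counting convention behind an interrelationship diagram (many outgoing arrows = likely driver, many incoming arrows = likely outcome); the factors and arrows below are invented for illustration.

```python
from collections import Counter

# Directed "influences" arrows between clusters (hypothetical example)
arrows = [
    ("Unclear requirements", "Rework"),
    ("Unclear requirements", "Missed deadlines"),
    ("Unclear requirements", "Customer escalations"),
    ("Rework", "Missed deadlines"),
    ("Rework", "Customer escalations"),
    ("Missed deadlines", "Customer escalations"),
]

out_deg = Counter(src for src, _ in arrows)   # arrows going out = driver strength
in_deg  = Counter(dst for _, dst in arrows)   # arrows coming in = effect strength

print("Likely key driver :", out_deg.most_common(1)[0])  # ('Unclear requirements', 3)
print("Likely key outcome:", in_deg.most_common(1)[0])   # ('Customer escalations', 3)
```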
- 7 replies
- interrelationship diagram
- cause - effect relationship
Box-Cox Transformation
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Box-Cox Transformation is the most commonly used method to transform non-normal data into normal data. It transforms the original data by raising it to a power (usually denoted by lambda). The value of lambda varies from -5 to 5.

Why would we need to transform the data? The short answer to the long theory comes down to two reasons:
1. The properties of the normal distribution
2. Normality is a pre-requisite for parametric statistical analysis

If we expect the data to be normally distributed but it is not, then before applying the transformation we should first check for data entry issues. That said, most of the time process data does not follow a normal distribution, and this is where transformations come in handy.

Analyses that can be performed after applying a Box-Cox transformation:
1. Stability analysis - one of the pre-requisites for continuous-data control charts is that the data should follow a normal distribution
2. Capability analysis - the original data gets transformed, but the resulting process capability is still usable. If one knows the underlying distribution of the data, this transformation may not be required; however, not everyone knows the many types of distributions
3. Regression analysis (or any of its variants) where the residuals are non-normal due to heteroscedasticity (i.e. the data does not have constant variance)

Analyses that should not need a Box-Cox transformation:
1. Descriptive statistics - there are measures that can handle non-normal data (median and IQR)
2. Inferential statistics - there are non-parametric tests (median tests) that can be performed on non-normal data. These tests do not require one to know the underlying distribution and are robust enough to handle non-normal data
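A minimal sketch using SciPy's implementation (scipy.stats.boxcox); the right-skewed sample is generated here purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=200)  # right-skewed, positive data

# boxcox raises the data to a power lambda, chosen by maximum likelihood
transformed, lam = stats.boxcox(skewed)
print(f"Estimated lambda: {lam:.3f}")

# Normality check before and after (Shapiro-Wilk; p > 0.05 -> cannot reject normality)
print(f"p before: {stats.shapiro(skewed).pvalue:.4f}")
print(f"p after:  {stats.shapiro(transformed).pvalue:.4f}")
```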
- 6 replies
Ansoff Matrix
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Ansoff Matrix is a marketing tool used for deciding the expansion or growth strategy of an organization. It is named after Igor Ansoff, the person who first proposed it. In the traditional matrix, there are 4 options for growth, based on two parameters - Markets and Products.

The 4 strategies are:
1. Market Development - develop new markets for your existing products
2. Market Penetration - increase market share in the existing market for the existing product
3. Diversification - develop new markets for new products
4. Product Development - develop new products for the existing market

[Extended Ansoff Matrix figure - source: https://www.researchgate.net/figure/b-Extended-Ansoff-matrix-The-fourth-activity-is-related-to-planning-of-new-business_fig2_334965141]
- 6 replies
- ansoff model
- business development
PICK Chart
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
PICK stands for Possible, Implement, Challenge and Kill.

A PICK chart is basically a visual tool used for decision making. It can be used after brainstorming, and not only in the Define phase but in other phases of a Six Sigma project as well - in fact, one could use it without doing a Six Sigma project at all. A PICK chart helps classify ideas as Possible, Implement, Challenge or Kill based on two criteria:
1. Effort of implementation
2. Payoff after implementation

Depending on the quadrant in which a particular idea falls, the corresponding action is taken.

Areas where this could be useful for businesses:
1. Project selection
2. Solution selection
3. Investment option selection
4. Strategic decision making
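A minimal sketch of the standard quadrant mapping (low/high effort vs. low/high payoff); the idea list is invented for illustration.

```python
def pick_category(effort, payoff):
    """Standard PICK quadrants: effort and payoff are 'low' or 'high'."""
    quadrants = {
        ("low",  "high"): "Implement",  # easy win - just do it
        ("low",  "low"):  "Possible",   # easy but low return - maybe
        ("high", "high"): "Challenge",  # big return but hard - needs a case
        ("high", "low"):  "Kill",       # hard and low return - drop it
    }
    return quadrants[(effort, payoff)]

ideas = [("Automate the daily report email", "low",  "high"),
         ("Rewrite legacy billing system",   "high", "high"),
         ("Re-tile the cafeteria",           "high", "low")]

for name, effort, payoff in ideas:
    print(f"{name}: {pick_category(effort, payoff)}")
```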
- 8 replies
- pick chart
Ben Franklin Effect
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
As per Wikipedia, the Ben Franklin effect is: a person who has already performed a favor for another is more likely to do another favor for them than if they had received a favor from that person. This effect appears to be the result of cognitive dissonance - if we have done a favor for someone, how can we possibly not like that person?

Given that this phenomenon works, businesses or individuals could use it to their advantage in the following ways:
1. Good inter-personal relations
2. Good client-vendor relations
3. Customer loyalty
4. Good supplier relations
5. Good team bonding

In all the above cases, the approach would be to first ask for a favor from the other party. Once the other party obliges and fulfills the favor, they will inherently start liking you (as per the Ben Franklin effect), resulting in good relationships.
- 5 replies
- benfranklineffect
Scrumban
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Agile development methodology is a process where the product or service is built up in an incremental and iterative manner. The methodology is successful for the following reasons:
1. It breaks the total work into smaller, time-boxed units - called sprints
2. Progress on these smaller units is reviewed frequently - in the scrum

Sprints are supposed to close within a specific time frame, and because they are short-duration efforts, review and decision making have to be swift. This is where scrum comes in. The scrum is a daily team huddle where the progress of the sprint is reviewed and course corrections are made (if required).

Kanban, on the other hand, is a signal from the downstream process to the upstream process asking for work.

Scrumban is the combination of Scrum and Kanban, and it utilizes the benefits of both. The key benefits of using Scrumban in software development are:
1. Changing customer requirements can be managed well
2. The flow of the sprints is made visible using kanban, which makes it easier for the team to understand their priorities and deliverables
3. Reduced chances of a deliverable getting missed
4. Better management of backlog items
- 5 replies
Tagged with:
- agile
- kanban
- scrum
- scrumban
Exponential function
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
An excellent way of applying the teachings to current affairs - using time series and forecasting to predict the number of new Coronavirus cases. Based on my research (and I am sure by now everyone knows this), pandemics follow exponential growth. So when governments say they want to flatten the curve, they basically mean that the exponential growth should be controlled.

Exponential growth happens when a quantity grows not by adding or multiplying a constant but by powers. E.g. let's assume a value that gets squared every day, starting at 1.01:
Day 1 => 1.01^2 = 1.0201
Day 2 => 1.0201^2 = 1.0406
Day 3 => 1.0406^2 = 1.0829
...
Day 10 => the value becomes 26612.57 (i.e. 1.01 raised to the power 2^10 = 1024)
Initially the growth is relatively small, but as time passes, the exponential growth results in very high numbers.

Regarding the forecasts for Coronavirus, I picked up the actuals data that was published, starting from 15th March as you are using that as the base value.
[Actuals data from 15th March onwards - image in the original post]

After running the trend analysis in Minitab for exponential growth and using it for forecasting, below are the results.
[Growth model plot for the 26th March forecast - image in the original post]
Using the above growth model, the forecasted value for 26th March = 60064.

Doing the same analysis for 27th March, but this time adding the actual figure for 26th March:
[Growth model plot for the 27th March forecast - image in the original post]
Using the above growth model, the forecasted value for 27th March = 69336.
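For readers without Minitab, here is a minimal sketch of the same idea - fitting a log-linear (exponential) trend and extrapolating one day ahead. The case counts below are invented placeholders, not the actual published figures used in the post.

```python
import numpy as np

# Hypothetical daily cumulative case counts (placeholders, not the real data)
cases = np.array([110, 130, 160, 195, 240, 290, 355, 430, 525, 640])
days = np.arange(len(cases))

# Exponential model: cases ~ a * exp(b*day)  ->  log(cases) ~ log(a) + b*day
b, log_a = np.polyfit(days, np.log(cases), 1)

next_day = len(cases)
forecast = np.exp(log_a + b * next_day)

print(f"Daily growth factor: {np.exp(b):.3f}")
print(f"Forecast for day {next_day}: {forecast:.0f}")
```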
- 3 replies
- exponentialfunction
Confidence Interval vs Prediction Interval
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
The Confidence Interval is the interval in which the population mean is expected to fall. Confidence intervals are determined in all hypothesis tests, as we infer something about the population from a sample.

The Prediction Interval is the interval in which an individual value is expected to fall. Prediction intervals are determined when we use statistical tools for prediction.

Since a confidence interval estimates a mean, there is less chance of going wrong and hence it is narrower. On the contrary, since a prediction interval must capture individual values, which vary around the mean, there is a higher chance of going wrong and hence it is wider than the confidence interval.

Examples
1. Estimating the Sensex or Nifty level at month end is like determining a confidence interval, whereas estimating the price of a particular stock at month end is like determining a prediction interval
2. Confidence interval - estimating the overall sales for the product mix. Prediction interval - estimating the sales for a specific product

Regression analysis, when used for forecasting or predictions, yields both confidence and prediction intervals. Usage of one over the other depends on the output / variable the organization is forecasting. I would believe that most organizations work with confidence intervals, while prediction intervals give them an indication of the best- and worst-case scenarios.
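A minimal sketch showing how a regression yields both intervals, using statsmodels on synthetic data; note how the prediction (obs) interval is wider than the confidence (mean) interval.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 5 + rng.normal(scale=2.0, size=50)   # synthetic linear data

model = sm.OLS(y, sm.add_constant(x)).fit()
pred = model.get_prediction(sm.add_constant(x)).summary_frame(alpha=0.05)

# mean_ci_*  -> 95% confidence interval for the mean response
# obs_ci_*   -> 95% prediction interval for an individual observation (wider)
print(pred[["mean", "mean_ci_lower", "mean_ci_upper",
            "obs_ci_lower", "obs_ci_upper"]].head())
```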
- 4 replies
- confidence interval
- prediction interval
Logical Subgrouping and Capability Analysis
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
Process Capability Assessment is the main step in the Measure phase, where the baseline metric is calculated. The following metrics can be used for the assessment:
1. Sigma Level Long Term (Zlt or Zoverall) and Sigma Level Short Term (Zst or Zwithin)
2. Pp, Ppk (using overall standard deviation) and Cp, Cpk (using within standard deviation)
3. DPMO, DPU and Defective %

Zwithin uses the within standard deviation for its calculation, while Zoverall uses the overall standard deviation. The difference between within and overall standard deviation lies in how you treat the collected data. If the entire data set is used as one lot, it results in the overall standard deviation; if we divide the data into rational subgroups, we get the within standard deviation (also known as the pooled standard deviation).

Another common way to understand the difference:
- within standard deviation captures only common cause variation
- overall standard deviation captures both common cause and special cause variation

Rational subgrouping is the collection of data under similar process conditions, which results in less variation within each group, leading to the relationship:
within standard deviation < overall standard deviation

The following are a few scenarios where subgrouping is NOT preferred:
1. Rational subgroups do not make sense when working with discrete data. E.g. if we form weekly subgroups and are collecting data on defects, then for a particular week with no defects (unlikely, but still possible) the within standard deviation will be 0. Hence it does not make much sense to use subgrouping with discrete data. On the contrary, one should check for the possibility of subgrouping in case of continuous data.
2. A consistent, standardized process that does not change very often. E.g. temperature control for stem cells: assuming it is maintained at -4 Celsius, it is unlikely to show much common cause variation. In such cases, even with subgrouping, the within and overall variation will be more or less the same (unless a special cause is present).
3. The project scope deals with a specific product or service delivered to a specific client. E.g. delivery time for the same kind of pizza by only one pizza outlet to a specific corporate customer (assuming this customer orders almost daily and orders the same pizza every time from the same outlet).
4. All process inputs are well controlled. If all the inputs are well controlled, there is little chance of variation in the process, and in such a scenario one could avoid rational subgrouping. The closest example I can think of is the process of making a burger at McDonald's: all the inputs are well controlled and hence we get the same taste every time. One could argue it is not a perfect example, and I tend to agree, because it is very difficult to find a process where all inputs can be controlled. There will always be fatigue, wear and tear etc. Like they say, there is no "perfect process".

The important thing to note is that irrespective of whether you do subgrouping or not, one should be consistent with the approach when doing a pre- vs post-project comparison. If you baselined with Zwithin, then compare the improvement with Zwithin only.

P.S. If all of this is too tedious, one could simply use the empirical formula Zwithin = Zoverall + 1.5 (however, remember that if the data is continuous, both can be determined independently as well).
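A minimal sketch of the within (pooled) vs. overall standard deviation calculation described above, on made-up subgrouped data:

```python
import numpy as np

# Hypothetical data: 4 rational subgroups of 5 measurements each
subgroups = np.array([
    [10.1, 10.3,  9.9, 10.2, 10.0],
    [10.8, 11.0, 10.7, 10.9, 11.1],
    [ 9.6,  9.8,  9.5,  9.7,  9.9],
    [10.4, 10.5, 10.3, 10.6, 10.4],
])

# Overall SD: treat all data as one lot (includes subgroup-to-subgroup shifts)
overall_sd = subgroups.ravel().std(ddof=1)

# Pooled (within) SD: average the subgroup variances, then take the square root
within_sd = np.sqrt(subgroups.var(axis=1, ddof=1).mean())

print(f"Overall SD: {overall_sd:.3f}")   # common + special cause variation
print(f"Within SD:  {within_sd:.3f}")    # common cause variation only - smaller here
```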
- 3 replies
Genchi Genbutsu
Natwar Lal replied to Vishwadeep Khatri's question in We ask and you answer! The best answer wins!
Genchi Genbutsu - "Go and See": investigate the issue and truly understand the customer situation. It basically refers to going and observing the process where the actual value is being added. As the question suggests, it makes perfect sense to use it in manufacturing; however, it is a myth that it is only used in manufacturing. As a concept, Genchi Genbutsu is domain and industry agnostic.

While preparing process maps, we usually tell the participants to create a map of "what the process is" and not "what it should be" or "what you think it is". One of the best ways of understanding "what the process is" is to pick up a transaction and do a walkthrough of the process with it. That is Genchi Genbutsu: by walking the transaction through the process, you actually go to the process and see how it works.

Some examples below where the idea is the same - "Go and See":
1. Issue resolution: when you raise an issue, the first thing the agent / engineer does is try to replicate it. They might do a screen share or take control of your computer and replicate the issue to understand where to attack and what to do.
2. Software testing: the first instance happens when the code is compiled - the compiler does a walkthrough of the entire code and highlights the sections that could not be compiled due to incorrect coding. The second happens during the multiple stages of testing - unit testing, integration testing and UAT. If a particular test case fails and the code is sent back to the developer, the developer will first recreate the situation to see the failure (this is Genchi Genbutsu).
3. Medical conditions: various invasive and non-invasive screening methods are used to first go to the specific location in the body and see the extent of the problem, e.g. X-ray, MRI, CT scan, angiography.
4. Servicing of a car: when you take your car for its regular service, the mechanic first takes a test drive. He is trying to get a feel of how the car drives so that he can pinpoint the issue - something he cannot do unless he drives it himself.
- 4 replies
- genchi gembutsu