I’ve heard it countless times from my Six Sigma* Master: “Variance is the enemy.” Well, for the sake of simplicity, let’s call it “variability” today, even though it’s not exactly the same thing. We won’t delve into statistics here; we’ll keep it light for this blog post.
Variability refers to unevenness, inconsistency, or deviation from the normal process, or, more specifically, to the causes creating that unevenness, inconsistency, or deviation. That’s why things spiral out of control and why they are hard to predict and manage. As you well know, variability is a problem in your research when it does not reflect biological differences but stems from changes in how a process or measurement is executed. We’re not talking about research here, though, even if the same logic carries all the way through.
We’re talking about your time: why you rarely have enough of it, and why research gets delayed.
Variability, as we’ll explain below, is the most common reason why processes break down, leaving you burning the midnight oil.
So, it’s fair to dub variability an “enemy.”
Imagine this scenario: You own a bakery, and with your well-oiled processes and skilled staff, you can serve one pastry aficionado every minute. On average, one customer arrives every minute.
Quiz question: How long is the average waiting time for your customers?
Well, you can’t predict it because you have no control over when your customers arrive. If they show up like clockwork, one each minute, the waiting time is zero. You serve one customer, then the next one arrives and so on.
But what happens when a bus pulls up, unloading 50 customers with an insatiable hunger for your pastries? The average waiting time (and consequently the dissatisfaction) skyrockets. In this example, the first customer is served immediately, but everyone behind them has to queue: the average waiting time jumps to about 25 minutes, leaving the last customer in the queue fuming after 49 minutes of waiting, nearly an hour.
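If you want to see where those numbers come from, here’s a minimal back-of-the-envelope sketch in Python (purely illustrative, mirroring the example above):

```python
# 50 customers arrive at once; you serve one per minute, so the i-th
# customer in line (counting from 0) waits i minutes to be served.
waits = list(range(50))                 # 0, 1, ..., 49 minutes

average_wait = sum(waits) / len(waits)  # (0 + 1 + ... + 49) / 50 = 24.5
longest_wait = max(waits)               # 49 minutes

print(f"Average wait: {average_wait} minutes")   # ~25 minutes
print(f"Longest wait: {longest_wait} minutes")   # nearly an hour
```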
If this happens for the first time or only once, it’s an unpredictable deviation from the process.
You can’t prepare for it.
The only thing you can do is firefight. In this case, as the owner, you might jump in to help serve or offer some treats to the waiting customers.
Similar occurrences happen in a lab or research team setting quite often.
Suddenly, something doesn’t work as it should. Maybe your centrifuge breaks down, or a team member falls ill on the day they were supposed to represent the lab at the departmental retreat. For such situations, you can only prepare by budgeting some firefighting time in your team’s schedule. You don’t know when or how often these issues will arise, or how much time they will consume. But by accounting for potential problems in your schedule, you can handle them with greater ease. And if nothing goes wrong, you gain a few extra hours to work on something else or even find the time to get that long-awaited massage.
The second type of variability is easier to address, at least in theory.
This is when you know there will be a deviation from the mean. However, even in this case, we need to distinguish between two different types.
- When you have some influence over the deviation.
The most common example of this is when a process is executed differently by each team member. A prime example is the documentation of experiments. In such cases, you have the power to change things for the better. Establishing a common standard for documentation, such as following a standard operating procedure, using a single template, or implementing a peer review process, can help reduce variability. When you’re considering ways to improve efficiency, targeting this type of variability should be your priority because it is unwanted, has potentially dire consequences (e.g. if not all important information is captured), and can be changed.
- When you have no influence over the deviation.
For instance, based on past experience, you’ve learned that a supplier can’t guarantee shipping a certain reagent within a week. Sometimes it takes two weeks, or even four. However, you’re not completely helpless in this situation either. You can make your processes robust enough to handle this variability. In the ordering example, you could place an order four weeks in advance, ensuring you receive the reagents on time, though possibly a few weeks early. Alternatively, you can raise the safety threshold that triggers an order, accounting for possible delays. Of course, this approach comes with the disadvantage of having to maintain a larger stock of reagents. As you can see, controlling this variability requires a trade-off: the reagents are available and you can conduct your experiments on schedule, but at the cost of maintaining an emergency stock.
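To make that trade-off concrete, here’s a minimal sketch of the safety-threshold idea in Python. All the numbers (weekly usage, lead times) are assumptions for illustration, not figures from the post:

```python
# Illustrative reorder-point calculation under variable lead times.
weekly_usage = 10   # reagent units consumed per week (assumed)
avg_lead_time = 1   # weeks the supplier usually needs (assumed)
max_lead_time = 4   # worst case observed in the past (assumed)

# Reordering against the average lead time leaves you exposed:
naive_threshold = weekly_usage * avg_lead_time     # 10 units

# Reordering against the worst case keeps experiments on schedule,
# at the cost of extra stock sitting on the shelf:
robust_threshold = weekly_usage * max_lead_time    # 40 units
safety_stock = robust_threshold - naive_threshold  # 30 extra units

print(f"Reorder at {robust_threshold} units; "
      f"{safety_stock} of them are the price of robustness.")
```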
Another scenario here is working with collaborators. We all know that some are quite predictable, with quick or reasonable turnaround times, while others are not. Think about making the processes you control robust enough to absorb this variability. For Stefanie, this meant planning to do critical parts of the work within their own team instead of relying on a collaborator’s contribution. Or it meant sending reminders early and often, and setting internal deadlines ahead of the actual ones to account for contributions arriving late. Here the cost is the additional time required to do the work yourself, in return for being able to move forward, or the effort required to remind the team (you could also call it managing or leading the team).
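The internal-deadline trick is just as easy to write down. A sketch with made-up dates and a hypothetical two-week buffer, sized from how late contributions have typically arrived:

```python
from datetime import date, timedelta

actual_deadline = date(2024, 6, 30)   # made-up date for illustration
typical_delay = timedelta(weeks=2)    # assumed from past experience

internal_deadline = actual_deadline - typical_delay
print(f"Ask collaborators to deliver by {internal_deadline}, "
      f"so even late contributions still meet {actual_deadline}.")
```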
If you plan for deviations from what should be the norm and adjust your processes, rather than arguing with reality, you make it easier on yourself.
To summarize:
1. Unpredictable variability: Account for firefighting in your time budget.
2. Predictable variability that you can change: Improve your processes. Harmonization is key to increasing quality, efficiency, and effectiveness.
3. Predictable variability that you can’t change: Make your processes robust. Consider the desired outcome and what you are willing to sacrifice.
And as a final piece of advice: Don’t trust the mean. On average, you and your dog have three legs, but variability makes this statement incorrect for both of you! 😊
Robert
*Six Sigma is a disciplined, data-driven methodology used to improve the quality of processes and reduce defects or errors within organizations. It was originally developed by Motorola in the 1980s and later popularized by companies like General Electric. The goal of Six Sigma is to achieve near-perfect performance by systematically identifying and eliminating sources of variability in processes.