How Can I Improve My Forecast Accuracy?


Imagine that Amanda (a completely imaginary person) is a demand planner at Kool Komfort Foods (a completely imaginary company, also branded as K2), a nationwide producer of healthy comfort foods.  She got her bachelor’s in Mechanical Engineering about five years ago.  After a short stint in manufacturing process engineering in another industry, she got interested in the business side of things and moved into supply chain planning, starting in demand planning.  After taking a couple of online courses and getting an APICS certification, she seized on the opportunity to be a junior demand planner at K2.  Through her affinity for math and her attention to detail, Amanda earned a couple of promotions and is now a senior demand planner.  She currently manages a couple of product lines, but has her sights set on becoming a demand planning manager and mentoring others.

Amanda has been using some of the common metrics for forecast accuracy, including MAPE (mean absolute percentage error) and weighted MAPE, but the statistical forecast doesn’t seem to improve and the qualitative inputs from marketing and sales are hit or miss.  Her colleague, Jamison, uses the standard deviation of forecast error to plan for safety stock, but there are still a lot of inventory shortages and overages.  

Amanda has heard her VP, Dmitry, present to other department heads how good the forecast is, but when he does that, he uses aggregate measures and struggles when he is asked to explain why order fill rate is not improving, if the forecast is so good.

Amanda wonders what is preventing her from getting better results at the product/DC level, where it counts.  She would love to forecast at the product/customer or product/store level, but she knows that she will need better results at the product/DC level before she can do that.  She is running out of explanations for her boss and the supply planning team.  She has been using some basic forecasting techniques that she has programmed into Excel, like single and double exponential smoothing as well as moving average, and even linear regression.  She is sure the math is correct, but the results have been disappointing. 

Amanda’s company just bought a commercial forecasting package.  She was hoping that would help.  It is supposed to run a bunch of models and select the best one and optimize the parameters, but so far, the simpler models perform the best and are no better – and sometimes worse – than her Excel spreadsheet.

Amanda has been seeing a lot of posts on LinkedIn about “AI”.  She has been musing to herself about whether there is some magic bullet in that space that might deliver better results.  But, she hasn’t had time to learn much about the details of that kind of modeling.  In fact, she finds it all a bit overwhelming, with all of the hype around the topic.

And, anyway, forecasts will always be wrong, they will always change, and the demand planner will always take the blame.  Investments in forecasting will inevitably reach diminishing returns, but for every improvement in forecast accuracy, there are cascading benefits through the supply chain and improvements in customer service.  So, what can Amanda and her company do to make sure they are making the most of the opportunity to anticipate market requirements without overinvesting and losing focus on the crucial importance of developing an ever more responsive value network to meet constantly changing customer requirements?

Unfortunately, there really is no “silver bullet” for forecasting, no matter how many hyperbolic adjectives are used by a software firm in their pitch.  That is not to say that a software package can’t be useful, but you need to really understand what you need and why before you go shopping.  

Demand planning consists of both quantitative and qualitative analysis.  Since the quantitative input can be formulated and automated (not that it’s easy or quick), it can be used for calculating and updating a probabilistic range for anticipated demand over time. 

A good quantitative forecast requires hard work and skilled analysis.  Creating the best possible quantitative forecast (without reaching diminishing returns) will provide a better foundation for, and even improve, qualitative input from marketing, sales, and others.


One of the first things you need to do is understand the behavior of the data.  This requires profiling the demand by product and location (either shipping plant/DC or customer location – let’s call that a SKU for ease of reference) with respect to volume and variability in order to determine the appropriate modeling approach.  For example, a basic approach is as follows:

  • High volume, low variability SKU’s will be easy to mathematically forecast and may be suited for lean replenishment techniques.
  • Low volume, low variability items may be best suited for simple reorder point replenishment.
  • High volume, high variability SKU’s will be difficult to forecast and may require a sophisticated approach to safety stock planning.
  • Low volume, high variability SKU’s may require a thoughtful postponement approach, resulting in an assemble- or make-to-order process.  
  • A more sophisticated approach would involve the use of machine learning for classification that might find clusters of demand along more dimensions.
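
The basic volume/variability profiling above can be sketched in a few lines of Python.  The cutoffs here are invented for illustration; in practice you would derive them from the portfolio itself (e.g., medians or quartiles across all SKUs):

```python
import statistics

def classify_sku(demand_history, volume_cutoff=100.0, cv_cutoff=0.5):
    """Classify a SKU into one of four volume/variability segments.

    Variability is measured by the coefficient of variation (CV):
    standard deviation divided by mean.  Cutoffs are illustrative.
    """
    mean = statistics.mean(demand_history)
    cv = statistics.stdev(demand_history) / mean if mean else float("inf")
    volume = "high" if mean >= volume_cutoff else "low"
    variability = "high" if cv >= cv_cutoff else "low"
    return f"{volume} volume / {variability} variability"

# A steady, high-volume SKU vs. a lumpy, low-volume one
print(classify_sku([120, 115, 130, 125, 118]))  # high volume / low variability
print(classify_sku([0, 40, 0, 0, 35, 0]))       # low volume / high variability
```

A machine learning approach would replace the two fixed cutoffs with clustering over more dimensions (seasonality, intermittency, price sensitivity, and so on).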

Profiling analysis can be complemented nicely by a Quantitative Reasonability Range Check (see below), which should be an on-going part of your forecasting process.

Once you have profiled the data, you can start to develop the quantitative forecast, but you will need to consider the questions:

  1. What is the appropriate level of aggregation for forecasting?
  2. What forecast lag should I use?
  3. How frequently should I forecast?
  4. What are the appropriate quantitative forecast models?
  5. How should I initialize the settings for model parameters?
  6. How should I consume the forecast?
  7. How will I compensate for demand that I couldn’t capture?
  8. What metrics should I use to measure forecast accuracy?

Let’s consider each of these questions, in turn.

A. Level of Aggregation

The point of this analysis is to determine which of the following approaches will provide you with the best results:

  • Forecasting at the lowest level and then aggregating up
  • Forecasting at a high level and just disaggregating down
  • Forecasting at a mid-level and aggregating up and, also, disaggregating down

B. Correct Lag

If you forecast today for the demand you expect tomorrow, you should be pretty accurate because you will have the most information possible, prior to actually receiving orders.  The problem with this is obvious.  You can’t react to this forecast (which will change each day up until you start taking orders for the period you are forecasting) by redistributing or manufacturing product because that takes some time.

Since you cannot procure raw materials, manufacture, pack, or distribute instantly, the “lead time” for these activities needs to be taken into account.  So, you need to have a forecast lag.  For example, if you need a month to respond to a change in demand, then, you would need to forecast this month for next month.  You can continue to forecast next month’s demand as you move through this month, but it’s unlikely you will be able to react, so when you measure forecast accuracy, you need to measure it at the appropriate lag.
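
To make the lag idea concrete, here is a small sketch (the periods and quantities are invented for illustration) that scores only the forecasts made far enough in advance to act on:

```python
def mape_at_lag(forecasts, actuals, lag):
    """MAPE computed only from forecasts made `lag` periods before the target.

    `forecasts` maps (made_in_period, target_period) -> forecast quantity;
    `actuals` maps target_period -> actual demand.
    """
    errors = [abs(forecasts[(t - lag, t)] - a) / a
              for t, a in actuals.items()
              if (t - lag, t) in forecasts and a]
    return sum(errors) / len(errors)

forecasts = {
    (1, 3): 90, (2, 4): 105,   # made two periods ahead (you can act on these)
    (2, 3): 98, (3, 4): 97,    # revised one period ahead (too late to react)
}
actuals = {3: 100, 4: 100}

print(round(mape_at_lag(forecasts, actuals, lag=1), 3))  # 0.025
print(round(mape_at_lag(forecasts, actuals, lag=2), 3))  # 0.075
```

The lag-1 error looks better, but if your response time is two periods, the lag-2 number is the one that describes the forecast you actually planned against.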

C. Frequency

Should you generate a new forecast every day? Every week?  Or, just once a month?  This largely depends on when you can get meaningful updates to your forecast inputs such as sales orders, shipment history, or updates to industry and any syndicated or customer data (whether leading or trailing indicators) that are used in your quantitative forecast.

D. Appropriate Forecasting Model(s)

So, what mathematical model should you use?  This is a key question, but as you can see, certainly not the only one.

The mathematical approach can depend on many factors, including, but not limited to, the following:

  • Profiling (discussed above)
  • Available and meaningful trailing and leading indicators
  • Amount of history needed for the model vs. history that’s still relevant
  • Assuming a distribution of demand vs. using the actual (empirical) distribution 
  • Explainability vs. accuracy of the model
  • The appearance of accuracy vs. useful accuracy (overfitting a model to the past)
  • Treatment of qualitative data (e.g., geography, holiday weekends, home football game, etc.)

A skilled data scientist can be a huge help.  A plethora of techniques is available, but a powerful machine learning (or other) technique can be like a sharp power tool.  You need to know what you’re doing and how to avoid hurting yourself.

E. Initializing the Steady State Settings for Parameters

Failure to properly initialize the parameters of a statistical model can cause it to underachieve.  In the case of Holt-Winters 3 parameter smoothing, for example, the modeler needs to have control over how much history is used for initializing the parameters.  If too little history is used, then forecasts will likely be very unreliable. 
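
As a hedged illustration, here is one common textbook scheme (by no means the only one) for initializing the level, trend, and seasonal indices of a Holt-Winters model from the first two full seasons of history:

```python
def init_holt_winters(history, season_len):
    """Initialize Holt-Winters components from history.

    One simple scheme: level = mean of the first season; trend = average
    per-period change between the first two seasons; seasonal index =
    each period's ratio to its season's mean, averaged over both seasons.
    More history generally gives steadier starting values.
    """
    if len(history) < 2 * season_len:
        raise ValueError("need at least two full seasons to initialize")
    s1 = history[:season_len]
    s2 = history[season_len:2 * season_len]
    level = sum(s1) / season_len
    mean2 = sum(s2) / season_len
    trend = (mean2 - level) / season_len
    seasonals = [((a / level) + (b / mean2)) / 2 for a, b in zip(s1, s2)]
    return level, trend, seasonals

# Quarterly demand with a summer peak, two years of history (invented data)
history = [80, 120, 160, 100, 88, 132, 176, 110]
level, trend, seasonals = init_holt_winters(history, season_len=4)
print(level, trend)
print([round(s, 2) for s in seasonals])
```

With too little history, these starting values (especially the seasonal indices) are dominated by noise, and the smoothing parameters end up chasing the initialization error.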

When it comes to machine learning, there are two kinds of parameters – hyperparameters and model parameters.  Training can optimize the model parameters, but knowledge, experience and care are required to select techniques that are likely to help and to set the hyperparameters for running models that will give you good results.

F. Forecast Consumption Rules

There are a few things to consider when you consume the forecast with orders.  For example, you might want to bring forward previously unfulfilled forecasts (or underconsumption) from the previous period(s), or there may be a business reason to simply treat consumption in each week or month in isolation.

You may want to calculate the forecast consumption more frequently than you generate a new forecast.
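
A minimal sketch of the carry-forward idea, assuming one simple rule set (real systems offer many variations, such as backward consumption windows):

```python
def consume_forecast(forecast, orders, carry_forward=True):
    """Remaining (unconsumed) forecast per period after netting orders.

    With carry_forward=True, forecast left over from earlier periods
    rolls into later ones; otherwise each period is netted in isolation.
    """
    remaining = []
    carried = 0.0
    for f, o in zip(forecast, orders):
        available = f + (carried if carry_forward else 0.0)
        left = max(available - o, 0.0)
        carried = left
        remaining.append(left)
    return remaining

forecast = [100, 100, 100]
orders   = [ 80, 110,  60]

print(consume_forecast(forecast, orders))                      # [20.0, 10.0, 50.0]
print(consume_forecast(forecast, orders, carry_forward=False)) # [20.0, 0.0, 40.0]
```

Note how the two policies disagree once an over-consumed period follows an under-consumed one; which is right is a business decision, not a math decision.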

G. Compensating for Demand You Couldn’t Capture

This is a particular challenge in the retail and CPG industries.  In CPG, many orders from retail customers are placed and fulfilled on a “fill or kill” basis.  The CPG firm fulfills what it can with the inventory on hand and then cancels or “kills” the rest of the order.

In retail, a consumer may simply go to a competitor or order online if the slot for the product on the shelf in a given store is empty.

In either case, sales or shipment history will under-represent true demand for that period.  If you don’t accurately compensate for this, your history will likely drive your forecast model to under-forecast.
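
One deliberately simple way to compensate (real implementations use richer censored-demand methods, such as fitting to the distribution of unconstrained periods) is to lift shipment history in known stockout periods up to a baseline estimated from the unconstrained periods:

```python
import statistics

def uncensor_history(shipments, stocked_out):
    """Replace shipment history in stockout periods with an estimate.

    Simple approach: substitute the mean of unconstrained periods
    whenever it exceeds what was actually shipped in a stockout period.
    """
    clean = [s for s, out in zip(shipments, stocked_out) if not out]
    baseline = statistics.mean(clean)
    return [max(s, baseline) if out else s
            for s, out in zip(shipments, stocked_out)]

shipments   = [100, 95, 40, 105, 30]
stocked_out = [False, False, True, False, True]

# Stockout periods are lifted to the unconstrained baseline of 100
print(uncensor_history(shipments, stocked_out))
```

Without some adjustment like this, the model learns the shortage, not the demand.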

H. Metrics and Measurement

There are many measures of forecast accuracy that can be used.  A couple of key questions to answer include the following:

  1. Who is the audience and what is their interest?  Consider the sales organization, which is interested in an aggregate measure of sales against their sales target, perhaps by sales group or geography.  On the other hand, customer service doesn’t really happen in aggregate.  If you want to have better customer service, you need to look at forecast accuracy at the SKU level.
  2. Are you measuring forecast error based on an assumed normal distribution that you have defined by projecting a mean and standard deviation?  Or, have you been able to use the actual distribution of forecast error, perhaps created through bootstrapping? 
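
For concreteness, here is how MAPE and weighted MAPE (wMAPE) are typically computed; notice how the volume weighting changes the picture when a small item has a large percentage error:

```python
def mape(forecast, actual):
    """Mean absolute percentage error; undefined when an actual is zero."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

def wmape(forecast, actual):
    """Weighted MAPE: total absolute error over total actual volume.

    Weights each item by its volume, so big movers dominate, and it
    stays defined even when some individual actuals are zero.
    """
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / sum(actual)

forecast = [100, 10]   # big mover forecast well, small item forecast badly
actual   = [ 90, 20]

print(round(mape(forecast, actual), 3))   # ≈ 0.306
print(round(wmape(forecast, actual), 3))  # ≈ 0.182
```

Both are legitimate; the point is to pick the measure that matches the audience and the decision, and to compute it at the correct lag.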

Remember that you will need to measure forecast error at the correct lag.

Another thing you may need to keep in mind is that not everyone has been trained to understand forecast error and its interrelationship to inventory, safety stock, and fill rate.  You may have a bit of education to do from time to time, even for executives.

Price & Forecast

In most cases, demand is elastic with respect to price.  In other words, there is a relationship between what you charge for something and the demand for it.  This is why consumer packaged goods companies run promotions and fund promotions with retailers, and also, why retailers run their own promotions.  The goal is to grow sales without losing money and/or gain market share (possibly, incurring a short-term loss).  The overall goal is to increase gross margin in a given time period.  Many CPG companies make competing products – think of shampoo or beverages, or even automobiles or car batteries.  And, of course, retailers sell products from their CPG suppliers that compete for shelf space and share of wallet.  Many retailers even sell their own private label goods.  The trick is how to price competing products such that you gain sales and margin over the set of products.  

Just as in forecasting demand, there are both quantitative and qualitative approaches to optimizing pricing decisions which, then, in turn, need to be incorporated into the demand forecast.  The quantitative approach has two components:

  1. Using ML techniques to predict price elasticity, considering history, future special events (home football game, holiday weekend, football team in playoffs, etc.), minimum and maximum demand, and perhaps other features.
  2. Optimizing the promotional offers so that margin is maximized.  For this, a mathematical optimization model may be best so that the total investment in promotional discount and allocations of that investment are respected, limits on cannibalization are enforced, and upper limits on demand are considered.
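
As a deliberately simplified stand-in for the ML approach in point 1 (the data here are fabricated, and a production model would add events, seasonality, and other features), a constant price elasticity can be estimated with a log-log regression:

```python
import math

def log_log_elasticity(prices, demands):
    """Estimate constant price elasticity via ordinary least squares on logs.

    Fits ln(demand) = a + e * ln(price); the slope e is the elasticity.
    """
    xs = [math.log(p) for p in prices]
    ys = [math.log(d) for d in demands]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Demand roughly doubles as price drops 30%: elasticity near -2
prices  = [1.00, 0.90, 0.80, 0.70]
demands = [100, 123, 156, 204]

print(round(log_log_elasticity(prices, demands), 2))  # ≈ -2.0
```

The estimated elasticity then becomes an input to the optimization in point 2, which chooses discounts across competing products subject to budget and cannibalization limits.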

The Quantitative Reasonability Range Check

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not hunting for a place to make a difference?  A Quantitative Forecast Reasonability Range Check (or maybe QRC, for short) accomplishes this perfectly.  If the historical data is not very dense, then a “reasonability range” may need to be calculated through “bootstrapping”, a process of randomly sampling the history to create a more robust distribution.   Once you have this distribution, you can assign a probability to a future forecast and leverage that probability for safety stock planning as well.
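
The bootstrapped reasonability range can be sketched in a few lines of Python using only the standard library (the history, percentile levels, and sample count here are illustrative):

```python
import random

def bootstrap_range(history, horizon=1, n=5000, lo_pct=5, hi_pct=95, seed=42):
    """Bootstrap a reasonability range for demand over `horizon` periods.

    Resamples history with replacement to build an empirical distribution
    of the horizon total, then reads off percentiles.  No normality is
    assumed; the range can be asymmetric if the history is.
    """
    rng = random.Random(seed)
    totals = sorted(sum(rng.choice(history) for _ in range(horizon))
                    for _ in range(n))
    return totals[int(n * lo_pct / 100)], totals[int(n * hi_pct / 100)]

# Skewed history: mostly ~100 per period with occasional spikes
history = [95, 100, 102, 98, 180, 97, 101, 99, 175, 100]
lo, hi = bootstrap_range(history, horizon=4)
print(lo, hi)  # an asymmetric band, stretched upward by the spikes
```

A point forecast falling outside this band is exactly the kind of "critical few" exception that deserves planner attention.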

At a minimum, a QRC must consider the following components:

  • Every level and combination of the product and geographical hierarchies
  • A quantitative forecast
  • An asymmetric prediction interval over time
  • Metrics for measuring how well a point forecast fits within the prediction interval
  • Tabular and graphical displays that are interactive, intuitive, always available, and current

If you are going to attempt to establish a QRC, then I would suggest five best practices:

Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

Minimize human time and effort by automating the math.  Leverage automation and, potentially, even cloud computing power, to deliver results that are self-explanatory and always available, providing an immediately understood context that identifies invalid forecasts. 

Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

Reflect reality.  Calculations of upper and lower bounds of the prediction interval should reflect seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of respecting reality involves calculating the reasonability range for future demand from what actually happened in the past so that you do not force assumptions of normality onto the prediction interval (this is why bootstrapping can be very helpful).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.

Illustrate business performance, not just forecasting performance with prediction intervals.  The range should be applied, not only from time-period to time-period, but also cumulatively across periods such as months or quarters in the fiscal year.


Demand planning is both quantitative and qualitative.  In this paper, we have touched on the high points of the best practices for building a good quantitative forecasting foundation for your demand planning process.  In our imaginary case, Amanda still has some work to do, some of which lies outside of her expertise.  She will need to articulate the case for making an investment to improve the quantitative forecast and building a better foundation for qualitative input and a consensus demand planning process.  A relatively small improvement in forecast accuracy can have significant positive bottom and top-line impact.  

Amanda needs to convince her management to invest in a consulting service that will deliver the math, without the hype, and within the context of experience, so that she can answer the key quantitative questions every demand planner faces:

  • What is the profile of my demand data?
  • What is the appropriate level of aggregation for forecasting?
  • What forecast lag should I use?
  • How frequently should I forecast?
  • What are the appropriate quantitative forecast models?
  • How should I initialize the settings for model parameters?
  • How should I consume the forecast?
  • How will I compensate for demand that I couldn’t capture?
  • What metrics should I use to measure forecast accuracy?

“Moneyball” and Your Business


It’s MLB playoff time, and my team (the Tribe) is there, again.  (Pregnant pause to enjoy the moment.)

A while back, the film “Moneyball” showed us how the Oakland A’s built a super-competitive sports franchise on analytics, essentially “competing on analytics”, within relevant business parameters of a major league baseball franchise.  The “Moneyball” saga and other examples of premier organizations competing on analytics were featured in the January 2006 Harvard Business Review article, “Competing on Analytics” (reprint R0601H) by Thomas Davenport, who also authored the book by the same name.

The noted German doctor, pathologist, biologist, and politician, Rudolf Ludwig Carl Virchow called the task of science “to stake out the limits of the knowable.”  We might paraphrase Rudolf Virchow and say that the task of analytics is to enable you to stake out everything that you can possibly know from your data.

So, what do these thoughts by Davenport and Virchow have in common?

In your business, you strive to make the highest quality decisions today about how to run your business tomorrow with the uncertainty that tomorrow brings.  That means you have to know everything you possibly can know today.  In an effort to do this, many companies have invested, or are considering an investment, in supply chain intelligence or various analytics software packages.  Yet, many companies who have made huge investments know only a fraction of what they should know from their ERP and other systems.  Their executives seem anxious to explore “predictive” analytics or “AI”, because it sounds good.  But, investing in software tools without understanding what you need to do and how is akin to attempting surgery with a wide assortment of specialized tools, but without having gone to medical school.

Are you competing on analytics?

Are you making use of all of the data available to support better decisions in less time?

Can you instantly see what’s inhibiting your revenue, margin and working capital goals across the entire business, in context?

Do you leverage analytics in the “cloud” for computing at scale and information that is always on and always current?

I appreciate everyone who stops by for a quick read.  I hope you found this both helpful and thought-provoking.

As we enter this weekend, I leave you with one more thought that relates to “business intelligence” — this time, attributed to Socrates:

“The wisest man is he who knows his own ignorance.”

Do you know what you don’t know?  Do I?

Have a wonderful weekend!

Does Your Demand Planning Process Include a “Quantitative Reasonability Range Check”?

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not hunting for a place to make a difference?  I’ll call it a Quantitative Forecast Reasonability Range Check (or maybe QRC, for short).  It may be similar in some ways to analyzing “forecastability” or a “demand curve analysis”, but different in at least one important aspect – the “reasonability range” is calculated through bootstrapping (technically, you would be bootstrapping a confidence interval, but please allow me the liberty of a less technical name – “reasonability range”).  A QRC can be applied across industries, but it’s particularly relevant in consumer products.

At a minimum, a QRC must consider the following components:

  1. Every level and combination of the product and geographical hierarchies
  2. A quality quantitative forecast
  3. A prediction interval over time
  4. Metrics for measuring how well a point forecast fits within the prediction interval
  5. Tabular and graphical displays that are interactive, intuitive, always available, and current.

If you are going to attempt to establish a QRC, then I would suggest five best practices:

1.  Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

2. Minimize human time and effort by maximizing the power of cloud computing.  Leverage the fast, ubiquitous computing power of the cloud to deliver results that are self-explanatory and always available everywhere, providing an immediately understood context that identifies invalid forecasts. 

3. Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

4. Reflect reality.  Calculations of upper and lower bounds of the reasonability range should reflect the fact that uncertainty grows with each extension of a forecast into a future time period.  For example, the upper and lower limits of the reasonability range for one period into the future should usually be narrower than the limits for two or three periods into the future.  These, in turn, should be narrower than the limits calculated for more distant future periods.  Respecting reality also means capturing seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of respecting reality involves calculating the reasonability range for future demand from what actually happened in the past so that you do not force assumptions of normality onto the range (this is why bootstrapping is essential).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.
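
The growth of uncertainty with horizon falls naturally out of the bootstrap.  This small illustrative sketch (invented data, standard library only) compares the width of the 5th–95th percentile band at one, two, and three periods out:

```python
import random

def range_width_by_horizon(history, horizons=(1, 2, 3), n=5000, seed=7):
    """Width of a bootstrapped 5th-95th percentile band for each horizon.

    For each horizon h, resamples history with replacement to build an
    empirical distribution of the h-period total, then measures the band.
    """
    rng = random.Random(seed)
    widths = []
    for h in horizons:
        totals = sorted(sum(rng.choice(history) for _ in range(h))
                        for _ in range(n))
        widths.append(totals[int(n * 0.95)] - totals[int(n * 0.05)])
    return widths

# Mostly steady demand with occasional spikes (invented)
history = [95, 100, 102, 98, 140, 97, 101, 99, 135, 100]
w1, w2, w3 = range_width_by_horizon(history)
print(w1, w2, w3)  # the band widens as the horizon extends
```

A range that is the same width at every horizon is a warning sign that the math is not respecting reality.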

5. Illustrate business performance, not just forecasting performance with reasonability ranges.  The range should be applied, not only from time period to time period, but also cumulatively across periods such as months or quarters in the fiscal year.

If you are engaged in demand planning or sales and operations planning, I would welcome your thoughts on performing a QRC.

Thanks again for stopping by Supply Chain Action.  As we leave the work week and recharge for the next, I leave you with the words of John Ruskin:

“When skill and love work together, expect a masterpiece.”

Have a wonderful weekend!

The Potential for Proven Analytics and Planning Tools in Healthcare Delivery

I’ve spent time in a hospital.  I was well cared for, but I didn’t like it, and I worried about the cost and how well I would be able to recover (pretty well, so far!)  Also, my daughter is a doctor (obviously takes after her mom!), so healthcare is obviously an area of high interest for me.

To say that managing a large, disaggregated system such as healthcare delivery with its multitude of individual parts, including patients, physicians, clinics, hospitals, pharmacies, rehabilitation services, home nurses, and more is a daunting task would be an understatement.

Like other service or manufacturing systems, different stakeholders have different goals, making the task even more challenging.

Patients want safe, effective care with low insurance premiums. 

Payers, usually not the patient, want low cost. 

Health care providers want improved outcomes, but also efficiency.

The Institute of Medicine has identified six quality aims for twenty-first century healthcare:  safety, effectiveness, timeliness, patient-centeredness, efficiency, and equity.  Achieving these goals in a complex system will require a holistic understanding of the needs and goals of all stakeholders and simultaneous optimization of the tradeoffs among them.

This, in turn, cannot be achieved without leveraging the tools that have been developed in other industries.  These are well known and are summarized in the table below.

While the bulk of the work and benefits related to these tools will lie at the organization level, such techniques can be applied directly to healthcare systems, beginning at the environmental level and working down to the patient, as indicated by the check marks in the table.

A few examples of specific challenges that can be addressed through systems analysis and planning solutions include the following:

1 – Optimal allocation of funding

2 – Improving patient flow through rooms and other resources

3 – Capacity management and planning

4 – Staff scheduling

5 – Forecasting, distributing and balancing inventories, both medical/surgical and pharmaceuticals

6 – Evaluation of blood supply networks

Expanding on example #5 (above), supply chain management solutions help forecast demand for services and supplies and plan to meet the demand with people, equipment and inventory.  Longer-term mismatches can be minimized through sales and operations planning, while short-term challenges are addressed with inventory rebalancing and scheduling.

Systems analysis techniques have been developed over many years and are based on a large body of knowledge.  These types of analytical approaches, while very powerful, require appropriate tools and expertise to apply them efficiently and effectively.  Many healthcare delivery organizations have invested in staff who have experience with some of these tools, including lean thinking in process design and six-sigma in supply chain management.  There are also instances where some of the techniques under “Optimizing Results” are being applied, as well as predictive modeling and artificial intelligence.  But, more remains to be done, even in the crucial, but less hyped, areas like inventory management.  Some healthcare providers may initially need to depend on resources external to their own organizations as they build their internal capabilities.

I leave you with a thought for the weekend – “Life is full of tradeoffs.  Choose wisely!”

The Time-to-Action Dilemma in Your Supply Chain

If you can’t answer these 3 sets of questions in less than 10 minutes (and I suspect that you can’t), then your supply chain is not the lever it could be to drive more revenue with better margin and less working capital:
1) What are inventory turns by product category (e.g. finished goods, WIP, raw materials, ABC category, etc.)?  How are they trending?  Why?
2) What is the inventory coverage?  How many days of future demand can you satisfy with the inventory you have on-hand right now?
3) Which sales orders are at risk and why?  How is this trending?  And, do you understand the drivers?
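
As an aside on question 2, "days of coverage" has a precise meaning: walk forward through the demand forecast, consuming on-hand inventory until it runs out.  A minimal sketch (illustrative, not a definitive implementation):

```python
def days_of_coverage(on_hand, daily_forecast):
    """How many future days of forecast demand current inventory covers.

    Consumes inventory day by day through the forecast; returns a
    fractional day for the last, partially covered period.
    """
    days = 0.0
    for demand in daily_forecast:
        if on_hand >= demand:
            on_hand -= demand
            days += 1
        else:
            days += on_hand / demand if demand else 0.0
            break
    return days

print(days_of_coverage(250, [100, 100, 100, 100]))  # 2.5 days
```

Using the forward-looking forecast, rather than dividing by average historical demand, is what makes the number meaningful for seasonal or trending items.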

Global competition and the transition to a digital economy are collapsing your slack time between planning and execution at an accelerating rate.


You need to answer the questions that your traditional ERP and APS can’t from an intelligent source where data is always on and always current so your supply chain becomes a powerful lever for making your business more valuable.


You need to know the “What?” and the “Why?” so you can determine what to do before it’s too late.  


Since supply chain decisions are all about managing interrelated goals and trade-offs, data may need to come from various ERP systems, OMS, APS, WMS, MES, and more, so unless you can consolidate and blend data from end-to-end at every level of granularity and along all dimensions, you will always be reinventing the wheel when it comes to finding and collecting the data for decision support.  It will always take too long.  It will always be too late.


You need diagnostic insights so that you can know not just what, but why.  And, once you know what is happening and why, you need to know what to do — your next best action, or, at least, viable options and their risks . . . and you need that information in context and “in the moment”.


In short, you need to detect opportunities and challenges in your execution and decision-making, diagnose the causes, and direct the next best action in a way that brings execution and decision-making together.


Some, and maybe even much, of detection, diagnosis and directing the next best action can be automated with algorithms and rules.  Where it can be, it should be.  But, you will need to monitor the set of opportunities that can be automated because they may change over time.


If you can’t detect, diagnose and direct in a way that covers your end-to-end value network in the time that you need it, then you need to explore how you can get there because this is at the heart of a digital supply chain.

As we approach the weekend, I’ll leave you with this thought to ponder:
“Leadership comes from a commitment to something greater than yourself that motivates maximum contribution from yourself and those around you, whether that is leading, following, or just getting out of the way.”
Have a wonderful weekend!

The Value Network, Optimization & Intelligent Visibility

The supply chain is more properly designated a value network through which many supply chains can be traced.  Material, money and data pulse among links in the value network, following the path of least resistance, accelerated by digital technologies, including additive manufacturing, more secure IoT infrastructure, RPA, and, potentially, blockchain. 

If each node in the value network makes decisions in isolation, the total value in one or more supply chain paths becomes less than it could be.  

In the best of all possible worlds, each node would eliminate activities that do not add value to its own transformation process such that it can reap the highest possible margin, subject to maximizing and maintaining the total value proposition for a value network or at least a supply chain within a value network.  This is the best way to ensure long-term profitability, assuming a minimum level of parity in bargaining position among trading partners and in advantage among competitors.

Delivering insights to managers that allow them to react in relevant-time without compromising the value of the network (or a relevant portion of a network, since value networks interconnect to form an extended value web) remains a challenge.

The good news is that many analytical techniques and the mechanisms for delivering them in timely, distributed ways are becoming ubiquitous.  For example, optimization techniques and scenarios can provide insights into profitable ranges for decisions, marginal benefits of incremental resources, and robustness of plans, given uncertain inputs.
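As a minimal sketch of the kind of insight mentioned above, the toy linear program below (using SciPy's `linprog`; the two products, two resources, and all numbers are invented for illustration) returns both a profit-maximizing plan and, through the dual values on the capacity constraints, the marginal benefit of one more unit of each resource:

```python
from scipy.optimize import linprog

# Toy plan: maximize profit of two products sharing two scarce resources.
# Profits per unit are negated because linprog minimizes.
c = [-40.0, -30.0]

# Resource usage per unit of each product.
A_ub = [[2.0, 1.0],   # machine hours per unit
        [1.0, 1.5]]   # labor hours per unit
b_ub = [100.0, 90.0]  # available machine / labor hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
              method="highs")

plan = res.x            # profit-maximizing production quantities
profit = -res.fun       # total profit (undo the negation)
# Dual values: marginal profit from one more hour of each resource.
shadow_prices = -res.ineqlin.marginals

print(plan, profit, shadow_prices)
```

Here the duals on the binding capacity constraints are precisely the "marginal benefits of incremental resources": the profit gained from one additional machine hour or labor hour, which tells a planner where added capacity pays off.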

When these techniques are combined with intelligent visibility that allows you to detect and diagnose anomalies in your supply chain, then everyone can make coordinated decisions as they execute.

I will leave you with these words of irony from Dale Carnegie, “You make more friends by becoming interested in other people than by trying to interest other people in yourself.”

Thanks again for stopping by and have a wonderful weekend!

What are “Analytics”?

“Analytics” is one of those business buzz words formed by transforming an adjective into a noun. 

So forceful and habitual is such misuse of language that one might call it a compulsion among business analysts and writers.

The term “analytics” commonly refers to software tools that organize, report, and sometimes visualize data in an attempt to lend meaning for decision-makers.  These capabilities have advanced in recent years, so that many types of graphical displays can readily be employed to expose data and make information from it.  Because “analytics” describes such a broad array of software applications, which industry analysts have attempted to segment in various ways, it is useful to establish some broad categories.

A simple, though imperfect, scheme such as the following may be the most useful; the potential value that can be achieved through each category increases from #1 through #4.

  1. Reports – repetitively run displays of pre-aggregated and sorted information with limited or no user interactivity.

  2. Dashboards – frequently updated displays of performance metrics, often graphical and ideally tailored to the needs of a given role.  Dashboards support the measurement of performance, based on pre-aggregated data with some user selection and drill-down capability.  Hierarchies of metrics have been created that attempt to link responsibility with performance indicators.  The most common such model is the Supply Chain Operations Reference Model (SCOR Model), created and maintained by the Supply Chain Council.

  3. Data Analysis Tools – interactive software applications that enable data analysts to dynamically aggregate, sort, plot, and otherwise explore data, based on metadata.  Significant advancements in recent years have dramatically expanded the options for visualizing data and accelerated the speed at which these tools generate results.

  4. Decision Support/Management Science Tools – simulation, optimization, and other approaches to multi-criteria decisions which require the application of statistics and mathematical modeling and solving.

Let’s focus on Decision Support/Management Science Tools, the category with the most potential for adding value to strategic (high value) decision-making in a sustained fashion. 
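As a small illustration of this category, the sketch below uses Monte Carlo simulation to compare stocking decisions under demand uncertainty.  The prices, costs, and demand distribution are assumptions invented for the example, not figures from this post:

```python
import random

random.seed(42)

# Monte Carlo evaluation of a stocking decision under uncertain demand:
# compare candidate order quantities by simulated average profit.
UNIT_COST, UNIT_PRICE, SALVAGE = 6.0, 10.0, 2.0

def simulated_profit(order_qty, n_trials=10_000):
    total = 0.0
    for _ in range(n_trials):
        demand = random.gauss(100, 20)       # assumed demand model
        sold = min(order_qty, max(demand, 0))
        leftover = order_qty - sold
        total += sold * UNIT_PRICE + leftover * SALVAGE - order_qty * UNIT_COST
    return total / n_trials

# Pick the best candidate order quantity by simulated profit.
best = max(range(80, 131, 5), key=simulated_profit)
print(best)
```

Even a toy model like this shows why the category carries the most potential value: it quantifies a trade-off (overstock cost versus lost margin) that no report or dashboard can resolve on its own.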

So, then, if that is what analytics are, do they enable higher quality decisions in less time, and if so, to what extent are those better decisions in less time driving cash flow and value for their business?  These are critically important questions because improved, integrated decision-making that is based in facts and adjusted for risk drives the bottom line.

Execution is good, but operational execution under a poor decision set is like going fast in the wrong direction.  Poor execution is bad, but perhaps not immediately fatal; poor decisions will put a business under very quickly.

Enabling higher quality decisions in less time depends on the decision-maker, but it can also depend on the tools employed and the skills of the analysts using the tools. 

The main activities in using these tools involve the following:

  1. Sifting through the oceans of data that exist in today’s corporate information systems
  2. Synthesizing the relevant data into information (a thoughtful data model within an analytical application is helpful, but not sufficient)
  3. Presenting it in such a way that a responsible manager can combine it with experience and quickly know how to make a better decision

Obtaining a valuable result requires careful preparation and skilled interaction, asking the right questions initially and throughout the above activities.

Some of the questions that need to be asked before the data can be synthesized into information in a useful way include the following:

  1. What is the business goal?
  2. What decisions are required to reach the goal?
  3. What are the upper and lower bounds of each decision? (Which outcomes are unlivable?)
  4. How sensitive is one decision to the outcome of other, interdependent decisions?
  5. What risks are associated with a given decision outcome?
  6. Will a given decision today impact the options for the same decision tomorrow?
  7. What assumptions are implicitly driven by insufficient data?
  8. How reliable is the data upon which the decision is based?
    • Is it accurate?
    • How much of the data has been driven by one-time events that are not repeatable?
    • What data is missing?
    • Is the data at the right level of detail?
    • How might the real environment in which the decision is to be implemented be different from that implied by the data and model (i.e. an abstraction of reality)?
    • How can the differences between reality and its abstraction be reconciled so that the results of the model are useful?

Ask the right questions.

Know the relative importance of each.

Understand which techniques to apply in order to prioritize, analyze and synthesize the data into useful information that enables faster, better decisions.

We often think of change when a new calendar year rolls around.  Since this is my first post of the new year, I’ll leave you with one of my favorite quotes on change, from Leo Tolstoy:  “Everybody thinks of changing humanity, and nobody thinks of changing himself.”

Have a wonderful weekend!

Resilience Versus Agility

Just a short thought as we move into this weekend . . .

Simple definitions of resiliency and agility as they relate to your value network might be as follows:

Resiliency:  The quality of your decisions and plans such that their value is not significantly degraded by variability in demand and/or changes in your competitive and economic environment.

Agility:  The ability to adjust your plans and execution for maximum value by responding to the marketplace based on variability in demand and/or changes in your competitive and economic environment.

You can take an analytical approach that will make your plans and decisions resilient and also give you insights into what you need to do in order to be agile.

You need to know the appropriate analytical techniques and how to use them for these ends.

A capable and usable analytical platform can mean the difference between knowing what you should do and actually getting it done.

For example, scenario-based analysis is invaluable for understanding agility, while range-based optimization is crucial for resiliency.
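A minimal sketch of scenario-based analysis, with invented scenarios and economics: it measures how much value a fixed plan gives up in each scenario relative to the best response for that scenario.  Low regret across scenarios indicates resiliency; the per-scenario best responses indicate what agility would need to capture:

```python
# Hypothetical demand scenarios (multipliers on a base forecast) and toy
# economics; all numbers here are assumptions for the sketch.
scenarios = {"base": 1.0, "downturn": 0.7, "surge": 1.3}
BASE_DEMAND, PRICE, COST, SALVAGE = 1000, 10.0, 6.0, 2.0

def plan_value(capacity, multiplier):
    """Value of committing to `capacity` if this scenario materializes."""
    demand = BASE_DEMAND * multiplier
    sold = min(capacity, demand)
    return sold * PRICE + (capacity - sold) * SALVAGE - capacity * COST

fixed_plan = 1000  # the plan we would commit to today
regrets = {}
for name, m in scenarios.items():
    # Agility benchmark: the best response if we could re-plan per scenario.
    best_response = max(plan_value(c, m) for c in range(600, 1401, 50))
    # Resiliency measure: value the fixed plan gives up in this scenario.
    regrets[name] = best_response - plan_value(fixed_plan, m)

print(regrets)
```

A planner reading this output knows both how robust today's commitment is and how much value is on the table for a more agile response in each scenario.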

Do you know how to apply these techniques?

Do you have the tools to do it continuously?

Can you create user and manager ready applications to support resiliency and agility?

Finally, I leave you with this thought from Curtis Jones:  “Life is our capital and we spend it every day.  The question is, what are we getting in return?”

Thanks for stopping by.  Have a wonderful weekend!

Analytics vs. Humalytics

I have a background in operations research and analysis so, as you might expect, I am biased toward optimization and other types of analytical models for supply chain planning and operational decision-making.  Of course, you know the obvious, ongoing challenges that users of these models face:

  1. The data inputs for such a model are never free of defects
  2. The data model that serves as the basis for a decision model is always deficient as a representation of reality
  3. As soon as a model is run, the constantly evolving reality increasingly deviates from the basis of the model

Still, models and tools that help decision-makers integrate many complex, interrelated trade-offs can enable significantly better decisions.

But, what if we could outperform very large, complex, periodic decision models through a sort of “existential optimization” or, as a former colleague of mine put it, “humalytics”?

Here is the question expressed more fully:

If decision-makers within procurement, manufacturing, distribution, and sales had “right time” information about tradeoffs and about how their individual contributions affect their own performance and that of the enterprise, could they collectively outperform a comprehensive optimization/decision model that is run periodically (e.g. monthly or quarterly), in the same way that market-based economies easily outperform centrally planned economies?

I would call this approach “humalytics” (a term borrowed from a former colleague, Russell Halper, but please don’t blame him for the content of this post!).  It leverages a network of the most powerful analytical engines, human brains, empowered with quantified analytical inputs that are updated in “real time,” or as close to that as required.  In this way, managers can combine those analytics with their experience and knowledge of the business, including factors that might not be captured in a decision model, to constantly make the best replenishment and fulfillment decisions, steadily increasing the value of the organization.

In other words, decision-makers would have instant, always-on access to both performance metrics and the tradeoffs that affect them.  For example, a customer service manager might see a useful visualization of the actual total cost of fulfillment (cost of inventory plus cost of disservice) and its key drivers, such as actual fill rates and inventory turns, as they happen, summarized in the most meaningful way, so that the responsible human can make the most informed “humalytical” decisions.
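A toy version of such a display might compute its metrics as sketched below.  The field names, holding rate, and shortage penalty are assumptions invented for the example, not any particular product's definitions:

```python
from dataclasses import dataclass

@dataclass
class FulfillmentSnapshot:
    units_demanded: int
    units_shipped: int
    avg_on_hand_value: float   # average inventory valued at cost
    cogs_annualized: float     # annualized cost of goods sold

HOLDING_RATE = 0.20            # assumed annual holding cost rate
SHORTAGE_PENALTY = 25.0        # assumed cost per unit of unmet demand

def metrics(s: FulfillmentSnapshot) -> dict:
    """Fill rate, turns, and total cost of fulfillment from one snapshot."""
    fill_rate = s.units_shipped / s.units_demanded
    turns = s.cogs_annualized / s.avg_on_hand_value
    holding_cost = s.avg_on_hand_value * HOLDING_RATE
    disservice_cost = (s.units_demanded - s.units_shipped) * SHORTAGE_PENALTY
    return {"fill_rate": fill_rate,
            "inventory_turns": turns,
            "total_cost_of_fulfillment": holding_cost + disservice_cost}

snap = FulfillmentSnapshot(units_demanded=1000, units_shipped=960,
                           avg_on_hand_value=50_000.0,
                           cogs_annualized=400_000.0)
print(metrics(snap))
```

Refreshed continuously from transactional data, a handful of numbers like these would give the customer service manager the live cost-versus-service tradeoff described above.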

Up until now, the answer has been negative for at least two reasons:

A. Established corporate norms and culture in which middle management (and maybe sometimes even senior management) strive diligently for the status quo.

B. Lack of timely and complete information and analytics that would enable decision-makers to act as responsible, accountable agents within an organization, the same way that entrepreneurs act within a market economy.

With your indulgence, I’m going to deal with these in reverse order.

A few software companies have been hacking away at obstacle B, and we may be approaching a tipping point where accurate, transparent information and relevant, timely analytics can be delivered in near real-time, even on mobile devices, allowing human decision-makers to constantly adjust their actions to deliver continuously improved performance.  This is what I am calling “humalytics”.

But the network of human decision-makers with descriptive metrics is not enough.  Critical insights into tradeoffs and metrics come through analytical models, leveraging capabilities like machine learning, optimization, and RPA, perhaps in the form of “mini-app” models that operate on a curated superset of data that is always on and always current.  So, at least two things are necessary:

1. Faster optimization and other analytical modeling techniques from which the essential information is delivered in “right time” to each decision-maker

2. An empowered network of (human) decision-makers who understand the quantitative analytics that are delivered to them and who have a solid understanding of the business and their part in it

In current robotics research there is a vast body of work on algorithms and control methods for groups of decentralized cooperating robots, called a swarm or collective.  Maybe we don’t need a swarm of robots, after all.  Maybe we just need empowered decision-makers who not only engage in Sales and Operations Planning (or, if you will, Integrated Business Planning), but in integrated business thinking and acting on an hourly (or right-time) basis.

What think you?

If you think this might make sense for your business, or if you are working on implementing this approach, I’d be very interested to learn your perspective and how you are moving forward.

I leave you with these words from Leo Tolstoy, “There is no greatness where there is no simplicity, goodness, and truth.”

Have a wonderful weekend!

Part 3 – Finding the ROI for an Investment in an Analytical SCM Solution

A piece of mine on this topic was published a few years ago, but the ideas are important, so I am recapitulating them here.  The first post in this series introduced the challenges of calculating the return on an investment in an analytical supply chain software application.  This post deals with the second of four challenges.

Part 3 – Second Challenge – Business analysis skills are lacking – “We are looking for the vendor to tell us!”

Can the Vendor Help?

After all, since the software vendor is proposing the solution, shouldn’t the vendor know how it will affect your company?  The vendor probably does have some useful information about whether the decision to purchase will be of some benefit.  They will be able to tell you in general what business symptoms can be affected.  They may even have survey data that show how other companies in your industry, or at least in other industries, have reported benefits.  They should have anecdotal evidence of how some existing customer plans to benefit or has benefited from investing in their approach or solution.

There are a couple of problems with the vendor’s input. First, the vendor cannot be objective. The vendor’s business is on the line.  It is probably a fierce competitor and its representatives may be under pressure to make this deal happen.  Second, directional information, surveys, and anecdotes may or may not be reasonable predictors of how your company will fare.

The current state of your business processes and how they are performing is pivotal to the potential return.

What Should You Do?

This reaction, “We are looking for the vendor to tell us!”, is similar to the first challenge and reaction, “We need to know now!”  This second challenge is less driven by time than by the perception that the skill to perform the cause-and-effect analysis, data gathering, and statistical analysis does not reside within your company.  But it is important for you to be able to understand, monitor, and control the process, even if you use an outside consultant.  Following these steps will help you do just that:

1. Identify and quantify undesirable symptoms.

2. Perform cause and effect analysis to find possible root causes.

3. Gather data by reason code (in order to prioritize root causes).

4. Quantify and analyze root causes (Pareto analysis).

5. Estimate the positive impact of your investment decision (e.g. a new supply chain management tool) on your root causes.

6. Extrapolate this to the positive impact on the undesirable symptoms.

7. Perform sensitivity analysis around your estimate in step 5 by varying the estimate and repeating step 6. This will give you a sense for the range of possible outcomes that is reasonable.
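Steps 3 through 7 can be sketched in a few lines of analysis.  The reason codes, event counts, and cost figures below are invented for illustration:

```python
# A minimal sketch of steps 3-7: Pareto-rank shortage events captured by
# reason code, then vary the step-5 improvement estimate for a range of
# outcomes.  All reason codes, counts, and costs are assumptions.
shortage_events_by_reason = {
    "forecast_error": 120,
    "supplier_late": 45,
    "data_entry": 20,
    "quality_hold": 15,
}
COST_PER_EVENT = 500.0  # assumed annual cost of one shortage event

# Step 4: Pareto analysis - rank root causes by frequency.
pareto = sorted(shortage_events_by_reason.items(), key=lambda kv: kv[1],
                reverse=True)
top_cause, top_count = pareto[0]

# Steps 5-7: estimate the new tool's reduction of the top cause,
# extrapolate to annual savings, and vary the estimate (sensitivity).
savings = [top_count * reduction * COST_PER_EVENT
           for reduction in (0.25, 0.50, 0.75)]
print(top_cause, savings)
```

The spread of the savings figures across the low, middle, and high estimates is exactly the "range of possible outcomes" that step 7 asks for.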

Your success at steps 1 through 7 will be most likely if you follow two additional guidelines:

1. “Time box” the analysis to a minimum of 2 weeks and a maximum of 30 days. These time frames are really only a guide to represent the order of magnitude for the minimum and maximum time frames.

2. Assign a primary internal resource for each area of analysis you undertake.

Once again, I’m grateful that you took a moment to read Supply Chain Action.  As a final thought, I remind you of the familiar words to ponder, “Luck is that point in life where opportunity and preparation meet.”

