The Time-to-Action Dilemma in Your Supply Chain



If you can’t answer these 3 sets of questions in less than 10 minutes (and I suspect that you can’t), then your supply chain is not the lever it could be to drive more revenue with better margin and less working capital:
1) What are inventory turns by product category (e.g. finished goods, WIP, raw materials, ABC category, etc.)?  How are they trending?  Why?
2) What is the inventory coverage?  How many days of future demand can you satisfy with the inventory you have on-hand right now?
3) Which sales orders are at risk and why?  How is this trending?  And, do you understand the drivers?
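As a rough sketch of what “always on” decision support implies, the first two questions reduce to simple calculations once the data is consolidated. All figures below are hypothetical, purely for illustration:

```python
# Hypothetical figures, purely for illustration: annual COGS and average
# inventory value by category (question 1), plus on-hand stock and the
# daily demand forecast (question 2).
annual_cogs = {"finished_goods": 6_000_000, "raw_materials": 2_400_000}
avg_inventory = {"finished_goods": 1_500_000, "raw_materials": 300_000}

# Inventory turns = annual cost of goods sold / average inventory value
turns = {cat: annual_cogs[cat] / avg_inventory[cat] for cat in annual_cogs}
print(turns)  # {'finished_goods': 4.0, 'raw_materials': 8.0}

def days_of_coverage(on_hand, daily_demand_forecast):
    """Walk forward through forecast demand until on-hand inventory runs out."""
    days = 0
    for d in daily_demand_forecast:
        if on_hand < d:
            return days + on_hand / d  # fractional final day
        on_hand -= d
        days += 1
    return days  # inventory outlasts the forecast horizon

print(days_of_coverage(100, [30, 30, 30, 30]))  # just over 3.3 days
```

The point is not the arithmetic, which is trivial; it is that the inputs must be consolidated, current, and trusted before these numbers can be produced in minutes rather than days.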

Global competition and the transition to a digital economy are collapsing your slack time between planning and execution at an accelerating rate.

 

You need to answer the questions that your traditional ERP and APS can’t from an intelligent source where data is always on and always current so your supply chain becomes a powerful lever for making your business more valuable.

 

You need to know the “What?” and the “Why?” so you can determine what to do before it’s too late.

 

Since supply chain decisions are all about managing interrelated goals and trade-offs, data may need to come from various ERP systems, OMS, APS, WMS, MES, and more.  Unless you can consolidate and blend data from end-to-end at every level of granularity and along all dimensions, you will always be reinventing the wheel when it comes to finding and collecting the data for decision support.  It will always take too long.  It will always be too late.

 

You need diagnostic insights so that you can know not just what, but why.  And, once you know what is happening and why, you need to know what to do — your next best action, or, at least, viable options and their risks . . . and you need that information in context and “in the moment”.

 

In short, you need to detect opportunities and challenges in your execution and decision-making, diagnose the causes, and direct the next best action in a way that brings execution and decision-making together.

 

Some, and maybe even much, of detection, diagnosis and directing the next best action can be automated with algorithms and rules.  Where it can be, it should be.  But, you will need to monitor the set of opportunities that can be automated because they may change over time.

 

If you can’t detect, diagnose and direct in a way that covers your end-to-end value network in the time that you need it, then you need to explore how you can get there because this is at the heart of a digital supply chain.

As we approach the weekend, I’ll leave you with this thought to ponder:
“Leadership comes from a commitment to something greater than yourself that motivates maximum contribution from yourself and those around you, whether that is leading, following, or just getting out of the way.”
Have a wonderful weekend!

How Can I Improve My Forecast Accuracy?

Imagine

Imagine that Amanda (a completely imaginary person) is a demand planner at Kool Komfort Foods (a completely imaginary company, also branded as K2), a nationwide producer of healthy comfort foods.  She got her bachelor’s in Mechanical Engineering about five years ago.  After a short stint in manufacturing process engineering in another industry, she got interested in the business side of things and moved into supply chain planning, starting in demand planning.  After taking a couple of on-line courses and getting an APICS certification, she seized on the opportunity to be a junior demand planner at K2.  Through her affinity for math and her attention to detail, Amanda earned a couple of promotions and is now a senior demand planner.  She currently manages a couple of product lines but has her sights set on becoming a demand planning manager and mentoring others.

Amanda has been using some of the common metrics for forecast accuracy, including MAPE (mean absolute percentage error) and weighted MAPE, but the statistical forecast doesn’t seem to improve and the qualitative inputs from marketing and sales are hit or miss.  Her colleague, Jamison, uses the standard deviation of forecast error to plan for safety stock, but there are still a lot of inventory shortages and overages.  

Amanda has heard her VP, Dmitry, present to other department heads how good the forecast is, but when he does that, he uses aggregate measures and struggles when he is asked to explain why order fill rate is not improving, if the forecast is so good.

Amanda wonders what is preventing her from getting better results at the product/DC level, where it counts.  She would love to have it at the product/customer or product/store level, but she knows that she will need better results at the product/DC level before she can do that.  She is running out of explanations for her boss and the supply planning team.  She has been using some basic forecasting techniques that she has programmed into Excel, like single and double exponential smoothing as well as moving average, and even linear regression.  She is sure the math is correct, but the results have been disappointing. 

Amanda’s company just bought a commercial forecasting package.  She was hoping that would help.  It is supposed to run a bunch of models and select the best one and optimize the parameters, but so far, the simpler models perform the best and are no better – and sometimes worse – than her Excel spreadsheet.

Amanda has been seeing a lot of posts on LinkedIn about “AI”.  She has been musing to herself about whether there is some magic bullet in that space that might deliver better results.  But, she hasn’t had time to learn much about the details of that kind of modeling.  In fact, she finds it all a bit overwhelming, with all of the hype around the topic.

And, anyway, forecasts will always be wrong, they will always change, and the demand planner will always take the blame.  Investments in forecasting will inevitably reach diminishing returns, but for every improvement in forecast accuracy, there are cascading benefits through the supply chain and improvements in customer service.  So, what can Amanda and her company do to make sure they are making the most of the opportunity to anticipate market requirements without overinvesting and losing focus on the crucial importance of developing an ever more responsive value network to meet constantly changing customer requirements?

Unfortunately, there really is no “silver bullet” for forecasting, no matter how many hyperbolic adjectives are used by a software firm in their pitch.  That is not to say that a software package can’t be useful, but you need to really understand what you need and why before you go shopping.  

Demand planning consists of both quantitative and qualitative analysis.  Since the quantitative input can be formulated and automated (not that it’s easy or quick), it can be used for calculating and updating a probabilistic range for anticipated demand over time.

A good quantitative forecast requires hard work and skilled analysis.  Creating the best possible quantitative forecast (without reaching diminishing returns) will provide a better foundation for, and even improve, qualitative input from marketing, sales, and others.

Profiling

One of the first things you need to do is understand the behavior of the data.  This requires profiling the demand by product and location (either shipping plant/DC or customer location – let’s call that a SKU for ease of reference) with respect to volume and variability in order to determine the appropriate modeling approach.  For example, a basic approach is as follows:

  • High volume, low variability SKUs will be easy to forecast mathematically and may be suited for lean replenishment techniques.
  • Low volume, low variability items may be best suited for simple reorder point replenishment.
  • High volume, high variability SKUs will be difficult to forecast and may require a sophisticated approach to safety stock planning.
  • Low volume, high variability SKUs may require a thoughtful postponement approach, resulting in an assemble-to-order or make-to-order process.
  • A more sophisticated approach would involve the use of machine learning for classification, which might find clusters of demand along more dimensions.
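A minimal sketch of the basic volume/variability profiling described above. The thresholds here are illustrative placeholders; real cutoffs should come from your own demand data:

```python
from statistics import mean, stdev

def profile_sku(history, volume_threshold=1000, cv_threshold=0.5):
    """Classify a SKU by total volume and by variability, measured as the
    coefficient of variation (stdev / mean).  Thresholds are illustrative."""
    avg = mean(history)
    cv = stdev(history) / avg if avg else float("inf")
    volume = "high" if sum(history) >= volume_threshold else "low"
    variability = "high" if cv >= cv_threshold else "low"
    return volume, variability

# Steady high-volume SKU: easy to forecast, a lean replenishment candidate
print(profile_sku([100, 105, 95, 100, 102, 98] * 2))   # ('high', 'low')
# Lumpy low-volume SKU: a postponement / make-to-order candidate
print(profile_sku([0, 12, 0, 0, 30, 1, 0, 0, 25, 0, 2, 0]))  # ('low', 'high')
```

A machine-learning clustering approach would replace the fixed thresholds with clusters learned over more dimensions, but the two-by-two classification above is the usual starting point.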

Profiling analysis can be complemented nicely by a Quantitative Reasonability Range Check (see below), which should be an on-going part of your forecasting process.

Once you have profiled the data, you can start to develop the quantitative forecast, but you will need to consider the questions:

  1. What is the appropriate level of aggregation for forecasting?
  2. What forecast lag should I use?
  3. How frequently should I forecast?
  4. What are the appropriate quantitative forecast models?
  5. How should I initialize the settings for model parameters?
  6. How should I consume the forecast?
  7. How will I compensate for demand that I couldn’t capture?
  8. What metrics should I use to measure forecast accuracy?

Let’s consider each of these questions, in turn.

A. Level of Aggregation

The point of this analysis is to determine which of the following approaches will provide you with the best results:

  • Forecasting at the lowest level and then aggregating up
  • Forecasting at a high level and just disaggregating down
  • Forecasting at a mid-level and aggregating up and, also, disaggregating down

B. Correct Lag

If you forecast today for the demand you expect tomorrow, you should be pretty accurate because you will have the most information possible prior to actually receiving orders.  The problem with this is obvious.  You can’t react to this forecast (which will change each day up until you start taking orders for the period you are forecasting) by redistributing or manufacturing product because that takes some time.

Since you cannot procure raw materials, manufacture, pack, or distribute instantly, the “lead time” for these activities needs to be taken into account.  So, you need to have a forecast lag.  For example, if you need a month to respond to a change in demand, then you would need to forecast this month for next month.  You can continue to forecast next month’s demand as you move through this month, but it’s unlikely you will be able to react, so when you measure forecast accuracy, you need to measure it at the appropriate lag.
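A sketch of what measuring at the appropriate lag means in practice (all numbers invented): keep every forecast keyed by both the period in which it was made and the period it covers, then judge accuracy only at the lag you can actually act on.

```python
# Invented example: forecasts keyed by (period made, period covered).
forecasts = {
    ("2024-01", "2024-02"): 120,  # lag-1: made in January for February
    ("2024-02", "2024-02"): 112,  # lag-0: made in-month, too late to act on
}
actual = {"2024-02": 110}

lag1_error = abs(forecasts[("2024-01", "2024-02")] - actual["2024-02"])
lag0_error = abs(forecasts[("2024-02", "2024-02")] - actual["2024-02"])
print(lag1_error, lag0_error)  # 10 2: the lag-0 number looks better,
                               # but the lag-1 number is the one that matters
```

Reporting the lag-0 error flatters the forecast; the lag-1 (or longer) error is what the supply plan actually had to live with.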

C. Frequency

Should you generate a new forecast every day? Every week?  Or, just once a month?  This largely depends on when you can get meaningful updates to your forecast inputs such as sales orders, shipment history, or updates to industry and any syndicated or customer data (whether leading or trailing indicators) that are used in your quantitative forecast.

D. Appropriate Forecasting Model(s)

So, what mathematical model should you use?  This is a key question, but as you can see, certainly not the only one.

The mathematical approach can depend on many factors, including, but not limited to, the following:

  • Profiling (discussed above)
  • Available and meaningful trailing and leading indicators
  • Amount of history needed for the model vs. history that’s still relevant
  • Assuming a theoretical distribution of demand vs. modeling the actual, observed distribution 
  • Explainability vs. accuracy of the model
  • The appearance of accuracy vs. useful accuracy (overfitting a model to the past)
  • Treatment of qualitative data (e.g., geography, holiday weekends, home football game, etc.)

A skilled data scientist can be a huge help.  A plethora of techniques is available, but a powerful machine learning (or other) technique can be like a sharp power tool.  You need to know what you’re doing and how to avoid hurting yourself.

E. Initializing the Steady State Settings for Parameters

Failure to properly initialize the parameters of a statistical model can cause it to underachieve.  In the case of Holt-Winters 3 parameter smoothing, for example, the modeler needs to have control over how much history is used for initializing the parameters.  If too little history is used, then forecasts will likely be very unreliable. 
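As a sketch of why initialization matters, here is one common two-season initialization scheme for the additive Holt-Winters model (one of several reasonable schemes, shown here with invented numbers; the guard clause illustrates why too little history makes the estimates unreliable):

```python
def holt_winters_init(history, season_len):
    """Initialize level, trend, and seasonal indices for additive
    Holt-Winters from the first two full seasons of history.  With less
    history than this, the estimates below cannot be computed reliably."""
    if len(history) < 2 * season_len:
        raise ValueError("need at least two full seasons to initialize")
    s1 = history[:season_len]
    s2 = history[season_len:2 * season_len]
    level = sum(s1) / season_len                       # first-season average
    trend = (sum(s2) - sum(s1)) / season_len ** 2      # avg per-period change
    overall = (sum(s1) + sum(s2)) / (2 * season_len)
    # Seasonal index: how far each position sits from the overall average
    seasonals = [(a + b) / 2 - overall for a, b in zip(s1, s2)]
    return level, trend, seasonals

# Invented quarterly history with a clear seasonal shape and mild trend
print(holt_winters_init([10, 20, 30, 40, 14, 24, 34, 44], season_len=4))
```

The model then refines these values as it smooths through the remaining history; poor starting values can take many periods to wash out, which is exactly the underachievement described above.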

When it comes to machine learning, there are two kinds of parameters – hyperparameters and model parameters.  Training can optimize the model parameters, but knowledge, experience, and care are required to select techniques that are likely to help and to set the hyperparameters for running models that will give you good results.

F. Forecast Consumption Rules

There are a few things to consider when you consume the forecast with orders.  For example, you might want to bring forward previously unfulfilled forecasts (or underconsumption) from the previous period(s), or there may be a business reason to simply treat consumption in each week or month in isolation.

You may want to calculate the forecast consumption more frequently than you generate a new forecast.
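The two consumption policies mentioned above can be sketched as follows (a simplified illustration with invented weekly buckets; real consumption rules often add backward/forward consumption windows):

```python
def consume(forecast_by_period, orders_by_period, carry_forward=True):
    """Net each period's forecast against actual orders.  With carry_forward,
    unconsumed forecast rolls into the next period; without it, each period
    is treated in isolation."""
    open_forecast = {}
    leftover = 0
    for period in sorted(forecast_by_period):
        available = forecast_by_period[period] + (leftover if carry_forward else 0)
        open_forecast[period] = max(available - orders_by_period.get(period, 0), 0)
        leftover = open_forecast[period]
    return open_forecast

# W1 under-consumes by 40; with carry-forward, W2 nets 140 against 120 in orders
print(consume({"W1": 100, "W2": 100}, {"W1": 60, "W2": 120}))
# {'W1': 40, 'W2': 20}
print(consume({"W1": 100, "W2": 100}, {"W1": 60, "W2": 120}, carry_forward=False))
# {'W1': 40, 'W2': 0}
```

Which policy is right is a business decision: carrying forward assumes the missed demand is still coming, while isolation assumes it is gone.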

G. Compensating for Demand You Couldn’t Capture

This is a particular challenge in the retail and CPG industries.  In CPG, many orders from retail customers are placed and fulfilled on a “fill or kill” basis.  The CPG firm fulfills what it can with the inventory on hand and then cancels or “kills” the rest of the order.

In retail, a consumer may simply go to a competitor or order online if the slot for the product on the shelf in a given store is empty.

In either case, sales or shipment history will under-represent true demand for that period.  If you don’t accurately compensate for this, your history will likely drive your forecast model to under-forecast.
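One simple way to compensate is to scale shipment history up for periods with stockouts, assuming demand during out-of-stock days matched the in-stock daily rate (a crude assumption, but a common starting point; the numbers below are invented):

```python
def uncensor_history(shipments, in_stock_days, days_per_period=30):
    """Scale up shipment history for periods with stockouts, assuming the
    out-of-stock days would have sold at the observed in-stock daily rate.
    A crude but common correction for 'fill or kill' censored demand."""
    adjusted = []
    for qty, days in zip(shipments, in_stock_days):
        daily_rate = qty / days if days else 0
        adjusted.append(daily_rate * days_per_period)
    return adjusted

# Period 2 was in stock only 20 of 30 days: 80 shipped implies ~120 true demand
print(uncensor_history([120, 80, 90], [30, 20, 30]))  # [120.0, 120.0, 90.0]
```

More sophisticated approaches model substitution and lost sales explicitly, but even this simple correction keeps the history from dragging the forecast model into chronic under-forecasting.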

H. Metrics and Measurement

There are many measures of forecast accuracy that can be used.  A couple of key questions to answer include the following:

  1. Who is the audience and what is their interest?  Consider the sales organization, which is interested in an aggregate measure of sales against their sales target, perhaps by sales group or geography.  On the other hand, customer service doesn’t really happen in aggregate.  If you want to have better customer service, you need to look at forecast accuracy at the SKU level.
  2. Are you measuring forecast error based on an assumed normal distribution that you have defined by projecting a mean and standard deviation?  Or, have you been able to use the actual distribution of forecast error, perhaps created through bootstrapping? 

Remember that you will need to measure forecast error at the correct lag.
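The two metrics Amanda uses can be computed as follows (invented numbers; note how the volume weighting changes the story, which is one reason aggregate reporting and SKU-level accuracy can disagree):

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error: average of per-period % errors.
    Undefined when any actual is zero, a known weakness of MAPE."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def weighted_mape(actuals, forecasts):
    """Volume-weighted MAPE: total absolute error over total actuals, so
    high-volume periods or SKUs dominate the result."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)

actuals, forecasts = [100, 10], [90, 20]
print(mape(actuals, forecasts))           # (0.10 + 1.00) / 2 = 0.55
print(weighted_mape(actuals, forecasts))  # (10 + 10) / 110 ≈ 0.18
```

The same errors produce 55% by plain MAPE and roughly 18% by weighted MAPE, which is how an aggregate story can sound good while SKU-level service suffers.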

Another thing you may need to keep in mind is that not everyone has been trained to understand forecast error and its interrelationship to inventory, safety stock, and fill rate.  You may have a bit of education to do from time to time, even for executives.

Price & Forecast

In most cases, demand is elastic with respect to price.  In other words, there is a relationship between what you charge for something and the demand for it.  This is why consumer packaged goods companies run promotions and fund promotions with retailers, and also, why retailers run their own promotions.  The goal is to grow sales without losing money and/or gain market share (possibly, incurring a short-term loss).  The overall goal is to increase gross margin in a given time period.  Many CPG companies make competing products – think of shampoo or beverages, or even automobiles or car batteries.  And, of course, retailers sell products from their CPG suppliers that compete for shelf space and share of wallet.  Many retailers even sell their own private label goods.  The trick is how to price competing products such that you gain sales and margin over the set of products.  

Just as in forecasting demand, there are both quantitative and qualitative approaches to optimizing pricing decisions which, then, in turn, need to be incorporated into the demand forecast.  The quantitative approach has two components:

  1. Using ML techniques to predict price elasticity, considering history, future special events (home football game, holiday weekend, football team in playoffs, etc.), minimum and maximum demand, and perhaps other features.
  2. Optimizing the promotional offers so that margin is maximized.  For this, a mathematical optimization model may be best so that the total investment in promotional discount and allocations of that investment are respected, limits on cannibalization are enforced, and upper limits on demand are considered.
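A toy sketch of the two-step idea: a constant-elasticity demand curve stands in for the ML elasticity prediction, and a brute-force search over candidate prices stands in for the optimization model.  All numbers, including the elasticity, are invented for illustration:

```python
def demand_at_price(base_demand, base_price, price, elasticity):
    """Constant-elasticity demand curve: demand scales with
    (price / base_price) ** elasticity.  In practice the elasticity would
    come from an ML model fit on history, events, and other features."""
    return base_demand * (price / base_price) ** elasticity

def best_price(base_demand, base_price, unit_cost, elasticity, candidates):
    """Pick the candidate price that maximizes gross margin."""
    return max(candidates,
               key=lambda p: (p - unit_cost)
               * demand_at_price(base_demand, base_price, p, elasticity))

# Base: 1000 units/week at $2.99, unit cost $1.50, elasticity -2.5
print(best_price(1000, 2.99, 1.50, -2.5, [2.49, 2.99, 3.49, 3.99]))  # 2.49
```

Here the discount wins because the demand lift more than pays for the lower margin per unit; a real promotional optimizer would add the constraints named above (total promotional budget, cannibalization limits, demand caps).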

The Quantitative Reasonability Range Check

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not hunting for a place to make a difference?  A Quantitative Forecast Reasonability Range Check (or maybe QRC, for short) accomplishes this perfectly.  If the historical data is not very dense, then a “reasonability range” may need to be calculated through “bootstrapping”, a process of randomly sampling the history to create a more robust distribution.   Once you have this distribution, you can assign a probability to a future forecast and leverage that probability for safety stock planning as well.
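A minimal sketch of the bootstrapping step (invented history with one promotional spike; a production QRC would run this per SKU and per period across the hierarchy):

```python
import random

def bootstrap_range(history, n=5000, lower_pct=5, upper_pct=95, seed=42):
    """Resample the history with replacement and take percentiles of the
    resampled means.  Unlike a normal approximation, the resulting
    reasonability range can be asymmetric."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = sorted(
        sum(rng.choices(history, k=len(history))) / len(history)
        for _ in range(n)
    )
    return means[n * lower_pct // 100], means[n * upper_pct // 100]

# Invented monthly history with one promotional spike
history = [90, 110, 95, 400, 105, 100, 98, 102]
low, high = bootstrap_range(history)
print(low, high)  # the interval is skewed upward by the spike
```

Any forecast that falls outside this range gets flagged for a planner's attention; everything inside it passes without consuming brainpower, which is the whole point of the QRC.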

At a minimum, a QRC must consider the following components:

  • Every level and combination of the product and geographical hierarchies
  • A quantitative forecast
  • An asymmetric prediction interval over time
  • Metrics for measuring how well a point forecast fits within the prediction interval
  • Tabular and graphical displays that are interactive, intuitive, always available, and current

If you are going to attempt to establish a QRC, then I would suggest five best practices:

Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

Minimize human time and effort by automating the math.  Leverage automation and, potentially, even cloud computing power, to deliver results that are self-explanatory and always available, providing an immediately understood context that identifies invalid forecasts. 

Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

Reflect reality.  Calculations of upper and lower bounds of the prediction interval should reflect seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of respecting reality involves calculating the reasonability range for future demand from what actually happened in the past so that you do not force assumptions of normality onto the prediction interval (this is why bootstrapping can be very helpful).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.

Illustrate business performance, not just forecasting performance with prediction intervals.  The range should be applied, not only from time-period to time-period, but also cumulatively across periods such as months or quarters in the fiscal year.

Summary

Demand planning is both quantitative and qualitative.  In this paper, we have touched on the high points of the best practices for building a good quantitative forecasting foundation for your demand planning process.  In our imaginary case, Amanda still has some work to do, some of which lies outside of her expertise.  She will need to articulate the case for making an investment to improve the quantitative forecast and building a better foundation for qualitative input and a consensus demand planning process.  A relatively small improvement in forecast accuracy can have significant positive bottom and top-line impact.  

Amanda needs to convince her management to invest in a consulting service that will deliver the math, without the hype, and within the context of experience, so that she can answer the key quantitative questions every demand planner faces:

  • What is the profile of my demand data?
  • What is the appropriate level of aggregation for forecasting?
  • What forecast lag should I use?
  • How frequently should I forecast?
  • What are the appropriate quantitative forecast models?
  • How should I initialize the settings for model parameters?
  • How should I consume the forecast?
  • How will I compensate for demand that I couldn’t capture?
  • What metrics should I use to measure forecast accuracy?

Who Is Spending Your Cash?

“Cash is king,” we hear.  I have seen this in the core values of major, multi-national corporations.  If you travel for your company, you likely face restrictions on the amount and/or cost of travel which you can book without very senior level approval.  I know of one company with revenues of about $15 billion in which the CFO has mandated approval of any air fare over $500, even for employees who routinely must book and re-book travel on short notice due to the nature of their duties.  I do not debate the wisdom of such policies.  I only use them to illustrate how carefully the expenditure of cash is scrutinized in many cases.  Capital expenditures require even greater examination and multiple approvals, perhaps even from the board of directors.  Despite these procedures, I pose this question:  “Do you really know who is spending your cash and how they are doing it?”

Consider where most of the cash is spent and who spends it.  In most manufacturing firms, the largest single expenditure of cash is for the acquisition of raw materials and their transformation and distribution, namely, the cost of goods sold.  What is not sold remains on the balance sheet as inventory.  A manufacturer with 40% gross margin is doing very well in most industries, although there are notable exceptions in pharmaceuticals and a few other manufacturing industries.

A 40% gross margin would mean that 60% of the cash inflow from sales is spent on inventory – inventory that is either sold or stored.  In fact, manufactured product (or at least the raw materials, components or intermediates/work-in-process) in every manufacturing operation is stored at some point before it is shipped to a customer.  That is why inventory turns or days in inventory (both relating inventory to sales through the cost of goods sold) are the most appropriate kinds of metrics for inventory rather than the absolute amount.

So, given the relative proportion of cash flow in the majority of manufacturing firms that is spent on inventory of one kind or another, compared to, say, the proportion of cash flow spent on travel, one might assume that the level of scrutiny and approval required for spending on inventory would be extraordinary and performed at the most senior level of the firm.  Is that true in your company?  Of course not.  Manufacturing and distribution operations would be paralyzed, and servicing customers effectively would be precluded by such a bureaucratic approach.

Supply chain planners or buyer/planners are people who must determine how much should be procured, when, and where.  Purchasing or sourcing professionals, whose mission is to make sure that the purchase price is minimized, support the planning function, but purchase orders are issued by buyer/planners.

Even if “buying” is separate from “planning”, it is the planner who decides how much is needed when and where.

Planners do not rank among the highest paid employees, yet they are pulling the lever to spend the majority of the company’s cash flow.

Most planners today have access to advanced planning and scheduling (APS) tools which embed material requirements planning (MRP – I know this should be “little mrp”, as opposed to “big MRP” for manufacturing requirements planning, but allow me this convention here for visual ease) and distribution requirements planning (DRP) calculations to aid them in determining how much to purchase.  These tools are very helpful.  They are particularly helpful if the forecast is exactly right, if forecast error is always normally distributed, if stated transit lead times always match reality, if yields are constant, and if service from one internal manufacturing or distribution point to another is always constant and known.  However, almost none of these conditions is ever true, and they are never all true at the same time.

So, not only do planners have to ultimately determine what to move, make and buy for every item in the bill of material (or formula/recipe) at every location in every future time period in the planning horizon, they must do so in an environment with many unknown inputs.

(At this point, I will include a plug for recruiting, training and retaining the very best planners – not vp’s of planning or directors of planning, but planners themselves since they are likely spending most of your cash!)

This problem is called multi-echelon inventory optimization (MEIO).  MEIO is fast becoming a best-practice requirement.  MEIO optimizes the answer to the very challenging problem of how much extra inventory a planner should plan to have at each location, for every item, at every level, given the many other unknown factors as well.  Put differently, “What is the inventory safety stock level that should be targeted for every item at every location, such that the cost of holding inventory for achieving a given service level for the end customer is minimized?”  This question must be answered across all nodes while considering all of the unknown factors mentioned above.

When solved, the result is a lower required buffer inventory than could be planned with just MRP or APS in order to achieve an optimal service level.  That means more available cash and more revenue and profits.
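For context, the single-echelon building block that MEIO generalizes is the familiar textbook safety stock formula for variable demand and variable lead time.  The sketch below uses invented numbers; MEIO's advantage is precisely that it solves for these targets jointly across all echelons instead of applying this formula node by node:

```python
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, sigma_demand, lead_time, sigma_lead_time,
                 mean_demand):
    """Classic single-echelon safety stock with variable demand AND variable
    lead time.  MEIO generalizes this by optimizing buffer targets jointly
    across echelons rather than applying the formula at each node."""
    z = NormalDist().inv_cdf(service_level)  # service-level z-score
    return z * sqrt(lead_time * sigma_demand ** 2
                    + (mean_demand * sigma_lead_time) ** 2)

# Invented example: 98% cycle service level, daily demand 100 +/- 20 units,
# lead time 9 +/- 2 days
ss = safety_stock(0.98, 20, 9, 2, 100)
print(round(ss))  # about 429 units
```

Applied independently at every node, this formula double-buffers against the same variability; netting that out across the network is where MEIO finds the inventory reduction described above.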

Solving the MEIO problem remains a massive challenge for which many planners still do not have sufficient tools at their disposal.  However, algorithms have been developed and can be implemented through commercially viable software.  It’s also increasingly possible to build your own on an advanced analytics platform.  As MEIO continues to be adopted, more planners can go about their normal planning process of determining what to move, make and buy, but with a much better starting point, namely the amount of inventory buffer required at each item/location.  This buffer, or safety stock, already a standard row in a supply planner’s gross-to-net calculation in his or her advanced planning system, allows planners to perform their work without disruption while achieving significantly better results for the cash management of their firm – when populated through MEIO.

Questions:

1)      How do your planners account for the unknown factors in determining how much cash to spend on which inventory in which locations and when?

2)      Are you thinking about evaluating MEIO?  If not, why not?

3)      Can you afford not to pay more attention to where the majority of your cash flow is going?

A Handy List of Keys to Careful, Comprehensive Inventory Management

I’m sure there are a lot of ways to summarize a careful, comprehensive approach to inventory management.  You may even be able to add to this list below or create your own, but this may be one handy way to think about it.

It might seem a little “corny”, but as a memory aid, I call this approach 5A6σ. 

  1. Anticipate market requirements
  2. Account for your actions
  3. Accurately calculate safety stock
  4. Access information and use it instead of inventory
  5. Accelerate by continuously reducing lead times and lot sizes
  6. Leverage the techniques of process control to reduce variation

___________________________________________________

Anticipate – anticipate market requirements

The more you are able to accurately anticipate the demand from your end customer in the marketplace, the more you will be able to move, make, buy, and store the inventory that will sell quickly.  This may seem like a self-evident axiom, but this is not easy, and the benefits of incrementally better anticipation go directly into additional revenue as well as more efficient inventory and use of cash.

Large bodies of knowledge have been built around this subject from rigorous quantitative models for forecasting to methodologies for collaborative forecasting, both within an organization and across organizations.  The point of diminishing returns can be reached fairly quickly, but if you are not there yet, it may be your most significant point of leverage for improved supply chain performance.

Account – for your actions

This is perhaps the simplest, but yet most difficult aspect of careful, comprehensive inventory management. 

When different functions of the business such as sales and manufacturing do not have harmonized goals, then inventory efficiency will suffer.  For example, if manufacturing is rewarded only for efficiency and overhead absorption, while sales is only rewarded for volume or even total margin dollars, and purchasing is rewarded for lowering per unit costs, then inventory is left to someone who does not have the ability to fully influence it. 

This is one of the benefits of a functional Sales and Operations Planning or Integrated Business Planning process that is supported by shared and consistent metrics across business functions.

Accurately – calculate safety stock

We cannot know for certain what demand will be tomorrow.  Even organizations dedicated to consumption-based replenishment of “true demand” cannot know exactly how much will be required of which products, at which locations, and at which times.

Make-to-order businesses have an easier time of this, but, even then, orders can change and often do.

So, demand varies, but so does the lead time to meet the demand which is affected by several factors:

  • Variation in the ability of manufacturing to respond in a timely and accurate fashion, driven by batch sizes and setup times, variation in the conversion process, and by other factors
  • Variation in the transportation operation (caused by traffic volume or accidents, road construction, weather, illness, and any number of other factors)
  • The capability of warehousing to know what is exactly where and pick, pack and ship it in a timely and efficient way.

For these reasons, you must have more inventory than will actually be needed if everything goes perfectly.

Any other approach implies the intentional loss of revenue.  Done poorly, this can put you out of business.

Fortunately, there are techniques for doing a good job of this through optimization.  Do a bit of research to identify the technique that fits the structure of your operations (single-tier distribution or multi-tier manufacturing, for instance) and get the analytical and software support (if necessary) to embed that technique into the normal planning process.  This can often yield a step-function improvement in both reducing the necessary investment of working capital in inventory as well as in improving customer service.

Combined with the set of decisions around supply chain flexibility with which inventory decisions are interdependent, decisions about inventory are essential to increasing the value of your enterprise.

Access – information and use it instead of inventory

Data is now more available than ever.  However, there are two key challenges in making use of it.  The first is the acquisition of data.  For everything, from the location of a particular serial number of an item to syndicated data of leading indicators of demand, there exist methods, technologies and markets for the acquisition of data. 

Once you have the data, you must then organize, summarize, and analyze the data into information in such a way that better decisions can be made in less time.  As an example, knowing more about your customer’s demand sooner may help you run your operations more effectively and efficiently to meet that demand when it actually comes to fruition.  Knowing the exact location of inventory as it transits and is transformed through your value network can, in some cases, help you respond more quickly to changes in the market without adding more inventory.

Accelerate – continuously reduce lead-times and lot sizes

Whether you consider yourself a “lean” operation or a “six-sigma” shop or both (“lean six-sigma”), the reality remains that all manufacturers are obliged to constantly search for ways to reduce lead times and run times (batch or lot sizes) during which variation can occur.  This sometimes requires analysis of fixed versus variable cost such as the fixed cost to modify equipment so that changeovers can be completed more quickly against the variable cost of carrying inventory for a longer time, or perhaps indefinitely.
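That fixed-versus-variable analysis can be sketched with the classic economic order quantity, which balances setup (changeover) cost against carrying cost; cutting setup cost directly shrinks the economic lot size (the demand and cost figures below are hypothetical):

```python
from math import sqrt

def eoq(annual_demand, setup_cost, holding_cost_per_unit):
    """Economic order quantity: balances annual setup cost against holding cost."""
    return sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# Hypothetical: 12,000 units/year demand, $2/unit/year to carry inventory.
before = eoq(12_000, setup_cost=500.0, holding_cost_per_unit=2.0)
after = eoq(12_000, setup_cost=125.0, holding_cost_per_unit=2.0)
# A 4x reduction in changeover cost halves the optimal lot size,
# which is the payoff that can justify the fixed equipment investment.
```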

Leverage techniques of process control to reduce variation

In conjunction with continuously reducing lead-times and lot sizes, a careful and comprehensive approach to inventory management requires that you make use of what we have known for decades about statistics to identify variation (the reason for safety stock) and its sources, so that you can work continuously to reduce it.  As with the “Anticipate” process, you will reach diminishing returns, but your progress may not be linear or only incremental, and it will be difficult to anticipate when a step-function improvement will occur, so this is an ongoing responsibility.

So, there you have it, the “5A6σ” approach for inventory management, if you will.

You need to do all of these in order to manage your inventory carefully and comprehensively. 

I suspect you are already careful and comprehensive in your approach to cash management.  And, since you spend so much of your cash on inventory, you need to take an equally intelligent approach to managing inventory.

And here’s a final thought to ponder from Henry David Thoreau as we begin to approach the weekend: “If a man does not keep pace with his companions, perhaps it is because he hears a different drummer.”

“Moneyball” and Your Business

 

It’s MLB playoff time, and my team (the Tribe) is there, again.  (Pregnant pause to enjoy the moment.)

A while back, the film “Moneyball” showed us how the Oakland A’s built a super-competitive sports franchise on analytics, essentially “competing on analytics”, within relevant business parameters of a major league baseball franchise.  The “Moneyball” saga and other examples of premier organizations competing on analytics were featured in the January 2006 Harvard Business Review article, “Competing on Analytics” (reprint R0601H) by Thomas Davenport, who also authored the book by the same name.

The noted German doctor, pathologist, biologist, and politician Rudolf Ludwig Karl Virchow called the task of science “to stake out the limits of the knowable.”  We might paraphrase Virchow and say that the task of analytics is to enable you to stake out everything that you can possibly know from your data.

So, what do these thoughts by Davenport and Virchow have in common?

In your business, you strive to make the highest quality decisions today about how to run your business tomorrow, with all the uncertainty that tomorrow brings.  That means you have to know everything you possibly can know today.  In an effort to do this, many companies have invested, or are considering an investment, in supply chain intelligence or various analytics software packages.  Yet, many companies that have made huge investments know only a fraction of what they should know from their ERP and other systems.  Their executives seem anxious to explore “predictive” analytics or “AI” because it sounds good.  But, investing in software tools without understanding what you need to do and how is akin to attempting surgery with a wide assortment of specialized tools, but without having gone to medical school.

Are you competing on analytics?

Are you making use of all of the data available to support better decisions in less time?

Can you instantly see what’s inhibiting your revenue, margin and working capital goals across the entire business, in context?

Do you leverage analytics in the “cloud” for computing at scale and information that is always on and always current?

I appreciate everyone who stops by for a quick read.  I hope you found this both helpful and thought-provoking.

As we enter this weekend, I leave you with one more thought that relates to “business intelligence” — this time, attributed to Socrates:

“The wisest man is he who knows his own ignorance.”

Do you know what you don’t know?  Do I?

Have a wonderful weekend!

Does Your Demand Planning Process Include a “Quantitative Reasonability Range Check”?

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not on hunting for a place to make a difference?  I’ll call it a Quantitative Reasonability Range Check (or QRC, for short).  It may be similar in some ways to analyzing “forecastability” or a “demand curve analysis”, but different in at least one important aspect – the “reasonability range” is calculated through bootstrapping (technically, you would be bootstrapping a confidence interval, but please allow me the liberty of a less technical name – “reasonability range”).  A QRC can be applied across industries, but it’s particularly relevant in consumer products.
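As a minimal sketch of the bootstrapping idea (the error history, horizon, and point forecast below are hypothetical): resample historical forecast errors with replacement, sum them over the horizon, and take percentiles of the resampled sums as the reasonability range, which naturally widens as the horizon extends.

```python
import random

def bootstrap_range(errors, horizon, point_forecast, n=5000, alpha=0.05, seed=42):
    """Bootstrap a reasonability range by resampling historical forecast errors."""
    rng = random.Random(seed)  # fixed seed so the check is reproducible
    sums = sorted(sum(rng.choice(errors) for _ in range(horizon)) for _ in range(n))
    lo = sums[int(n * alpha / 2)]          # 2.5th percentile of resampled error sums
    hi = sums[int(n * (1 - alpha / 2)) - 1]  # 97.5th percentile
    return point_forecast + lo, point_forecast + hi

# Hypothetical monthly forecast errors (actual minus forecast) from history:
errors = [-12, 8, -5, 15, -20, 3, 9, -7, 11, -4, 6, -9]
r1 = bootstrap_range(errors, horizon=1, point_forecast=100)
r3 = bootstrap_range(errors, horizon=3, point_forecast=100)
# The range three periods out is wider than the range one period out.
```

Because the range is built from what actually happened, no normality assumption is imposed; a point forecast falling outside it gets flagged for planner attention.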

At a minimum, QRC must consider the following components:

  1. Every level and combination of the product and geographical hierarchies
  2. A quality quantitative forecast
  3. A prediction interval over time
  4. Metrics for measuring how well a point forecast fits within the prediction interval
  5. Tabular and graphical displays that are interactive, intuitive, always available, and current.

If you are going to attempt to establish a QRC, then I would suggest five best practices:

1.  Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

2. Minimize human time and effort by maximizing the power of cloud computing.  Leverage the fast, ubiquitous computing power of the cloud to deliver results that are self-explanatory and always available everywhere, providing an immediately understood context that identifies invalid forecasts. 

3. Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

4. Reflect reality.  Calculations of the upper and lower bounds of the reasonability range should reflect the fact that uncertainty grows with each extension of a forecast into a future time period.  For example, the upper and lower limits of the reasonability range for one period into the future should usually be narrower than the limits for two or three periods into the future.  These, in turn, should be narrower than the limits calculated for more distant future periods.  Reflecting reality also means capturing seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of reflecting reality involves calculating the reasonability range for future demand from what actually happened in the past, so that you do not force assumptions of normality onto the range (this is why bootstrapping is essential).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.

5. Illustrate business performance, not just forecasting performance, with reasonability ranges.  The range should be applied not only from time period to time period, but also cumulatively across periods, such as months or quarters in the fiscal year.

If you are engaged in demand planning or sales and operations planning, I would welcome your thoughts on performing a QRC.

Thanks again for stopping by Supply Chain Action.  As we leave the work week and recharge for the next, I leave you with the words of John Ruskin:

“When skill and love work together, expect a masterpiece.”

Have a wonderful weekend!

Ava Ex Machina and the Retail Supply Chain

There is a lot of buzz about the “autonomous” supply chain these days.  The subject came up at a conference I attended where the theme was the supply chain of 2030.  But, before we turn out the lights and lock the door to a fully automated, self-aware, supply chain “Ava Ex Machina”, let’s take a moment and put this idea into some perspective.

 

The Driverless Car Analogy

I’ve heard the driverless vehicle used as an analogy for the autonomous supply chain.  However, orchestrating the value network where goods, information and currency pulse freely, fast, and securely between facilities, organizations, and even consumers, following the path of least resistance (aka the digital supply chain), may prove to be even more complex than driving a vehicle.  Digital technologies, such as additive manufacturing, blockchain, and more secure IoT infrastructure, advance the freedom, speed and security of these flows.  As these technologies make more automation possible, as well as a kind of “autonomy”, the difficulty and importance of guiding these flows become ever more crucial.

Most sixteen-year-old adolescents can successfully drive a car, but you may not want to entrust your global value network to them.

Even Elon Musk says that Tesla autopilot will never be perfect.

 

Before you can have an autonomous supply chain, you need to accelerate the Detect, Diagnose, Direct Cycle – let’s call it the 3-D Cycle, for short, not just because it’s alliterated, but because each “D” is one of three key dimensions of orchestrating your value network.  In fact, as you accelerate the 3-D Cycle, you will learn just how much automation and autonomy makes sense.

 

Figure 1

Detect, Diagnose, Direct

The work of managing the value network has always been to make the best plan, monitor issues, and respond effectively and efficiently.  However, since reality begins to diverge almost immediately from even the best plans, perhaps the most vital challenges in orchestrating a value network are monitoring and responding.

 

In fact, every plan is really just a response to the latest challenges and their causes.

 

So, if we focus on monitoring and responding, we are covering all the bases of what planners and executives do all day . . . every day.

 

Monitoring involves detecting and diagnosing those issues which require a response.  Responding is really directing the next best action.  That’s why we can think in terms of the “Detect, Diagnose, Direct Cycle”:

 

  1. Detect (and/or anticipate) market requirements and the challenges in meeting them
  2. Diagnose the causes of the challenges, both incidental and systemic
  3. Direct the next best action within the constraints of time and cost

 

The 3-D Cycle used to take a month, in cases where it was even possible.  Digitization – increased computing power, more analytical software, the availability of data – has made it possible in a week.  Routine, narrowly defined, short-term changes are now addressed even more quickly under a steady state – and a lot of controlled automation is not only possible in this case, but obligatory.  However, no business remains in a steady state, and changes from that state require critical decisions which add or destroy significant value.

 

You will need to excel at managing and accelerating the 3-D Cycle if you want to win in the digital economy.

 

There is no industry where mastering this Cycle is more challenging than in retail, but the principles apply across most industries.

Data Is the Double-edged Sword

The universe of data is exploding exponentially from growing connections among organizations, people and things, creating the need for an ever-accelerating 3-D Cycle.  This is especially relevant for retailers, and it presents both a challenge and an opportunity for competing in the digital economy with a digital value network.  Redesigned retail supply chains, enabled with analytics and augmented reality, are not only meeting, but raising, consumer expectations.

 

Figure 2

Amazon’s re-imagination of retail means that competitors must now think in terms of many-to-many flows of information, product, and cash along the path of least resistance for the consumer (and not just to and from their own locations).  This kind of value network strategy goes beyond determining where to put a warehouse and to which stores it should ship.  Competing in today’s multi-channel world can mean inventing new ways to do business, even in the challenging fashion space – and if it is happening in fashion, it would be naive to think rising consumer expectations can be ignored in other retail segments, or even other industries.  Consider a few retail examples:

Zara leverages advanced analytics, not only to sense trends, but also to optimize pricing and operations in their vertically integrated supply chain.

Stitch Fix is changing the shopping model completely, providing more service with less infrastructure.

Zalando has been so successful in creating a rapid response supply chain that they are now providing services to other retailers.

Nordstrom, of all organizations, is opening “inventoryless” stores.

Walmart has been on an incredible acquisition and partnership spree, recently buying Flipkart and, as early as two years ago, partnering with JD.com.  And, then, there is the success of Walmart.com.

Target is redesigning the way their DC’s work, creating a flow-through operation with smaller replenishment quantities.

 

Yet, many companies are choking on their own ERP data as they struggle to make decisions on incomplete, incorrect and disparate data.  So, while the need for the 3-D Cycle to keep pace grows ever more intense, some organizations struggle to do anything but watch.  The winners will be those who can capitalize on the opportunities that the data explosion affords by making better decisions faster through advanced analytics (see Figure 2).

 

The time required just to collect, clean, transform and synchronize data for analysis remains the fundamental barrier to better detection, diagnosis and decisions in the value network.  A consolidated data store that can connect to source systems and on which data can be consolidated, programmatically “wrangled”, and updated into a supra data set forms a solid foundation on which to build better detection, diagnosis, and decision logic that can execute in “relevant time”.  This can seem like an almost insurmountable challenge, but it is not only doable with today’s technology, it’s becoming imperative.  And, it’s now possible to work off of a virtual supra data set, but that’s a discussion for another day.
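A minimal sketch of that consolidation step (the system names and records below are hypothetical): blend records from multiple source systems into one keyed, analysis-ready set, keeping the gaps visible rather than papering over them.

```python
# Hypothetical extracts from two source systems, keyed by SKU.
erp_onhand = {"SKU-1": {"on_hand": 120}, "SKU-2": {"on_hand": 40}}
wms_locations = {"SKU-1": {"location": "DC-EAST"}, "SKU-3": {"location": "DC-WEST"}}

def blend(*sources):
    """Outer-join records by key so gaps in any one system stay visible."""
    merged = {}
    for source in sources:
        for key, record in source.items():
            merged.setdefault(key, {}).update(record)
    return merged

supra = blend(erp_onhand, wms_locations)
# SKU-2 has no location and SKU-3 has no on-hand count -- exactly the kind of
# incomplete, disparate data that programmatic "wrangling" must surface and fix.
```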

Detect, Diagnose and Direct with Speed, Precision & Advanced Analytics

Detection of incidental challenges (e.g. demand is surging or falling based on local demographics, a shipment is about to arrive late, a production shortfall at a vendor, etc.) in your value network can be significantly automated to take place in almost real-time, or at least, in relevant time.   Detection of systemic challenges will be a bit more gradual and is based on the metrics that matter to your business, such as customer service, days of supply, etc., but it is the speed and, therefore, the scope, that is now possible that drives better visibility from detection.

 

Diagnosing the causes of incidental problems is only limited by the organization and detail of your transactional data.  Diagnosing systemic challenges requires a hierarchy of metrics with respect to cause and effect (such as the SCOR® model).  Certainly, diagnosis can now happen with new speed, but it is the combination of speed and precision that makes a new level of understanding possible through diagnosis.

 

With a clean, complete, synchronized data set that is always available and always current, as well as a proactive view of what is happening and why, you need to direct the next best action while it still matters.  You need to optimize your trade-offs and perform scenario and sensitivity analysis.

 

Figure 3, below, shows both incidental/operational and systemic/strategic examples for all three dimensions of the 3-D Cycle.

Figure 3

Speed in detection, speed and precision in diagnosis, and the culmination of speed, precision and advanced analytics in decision-making give you the power to raise the performance of your value network to levels not previously possible.  Much of the entire 3-D Cycle and the prerequisite data synchronization can be, and will be, automated by industry leaders.  Just how “autonomous” those decisions become remains to be seen.

 

Fortunately, you don’t need Ava Ex Machina, but your ability to develop a faster and better (and, I will even say more autonomous) 3-D Cycle is fundamental to your journey toward the digital transformation of your value network.

 

The basic ideas of detecting, diagnosing and directing are not novel to supply chain professionals and other business executives.   However, the level of transparency, speed, precision and advanced analytics that are now available mandate a new approach and promise dramatic results.  Some will gradually evolve toward a better, faster 3-D cycle.  The greatest rewards will accrue to enterprises that climb each hill with a vision of the pinnacle, adjusting as they learn.  These organizations will attract more revenue and investment.  Companies that don’t capitalize on the possibilities will be relegated to hoping for acquisition by those that do.

 

Admittedly, I’m pretty bad at communicating graphically, but I’ve attempted to draft a rudimentary visual of what the architecture to support a state-of-the-art 3-D Cycle could look like (below), as a vehicle for facilitating discussion.

 

 

The convergence of cloud business intelligence (BI) technology and traditional advanced planning solutions supports my point, and that is definitely happening.

I work for Blue Yonder, so I’m biased, but I think the capabilities Blue Yonder is building out in support of the 3-D Cycle are “spot-on”.

As the unstoppable train of time pulls us into the weekend, I leave you with this thought to ponder: 

“Life is short, so live it well, in gratitude, honesty and hope, and never take it for granted.”

 

Have a wonderful weekend!

The Potential for Proven Analytics and Planning Tools in Healthcare Delivery

I’ve spent time in a hospital.  I was well cared for, but I didn’t like it, and I worried about the cost and how well I would recover (pretty well, so far!).  Also, my daughter is a doctor (obviously takes after her mom!), so healthcare is an area of high interest for me.

To say that managing a large, disaggregated system such as healthcare delivery with its multitude of individual parts, including patients, physicians, clinics, hospitals, pharmacies, rehabilitation services, home nurses, and more is a daunting task would be an understatement.

Like other service or manufacturing systems, different stakeholders have different goals, making the task even more challenging.

Patients want safe, effective care with low insurance premiums. 

Payers, usually not the patient, want low cost. 

Health care providers want improved outcomes, but also efficiency.

The Institute of Medicine has identified six quality aims for twenty-first century healthcare:  safety, effectiveness, timeliness, patient-centeredness, efficiency, and equity.  Achieving these goals in a complex system will require a holistic understanding of the needs and goals of all stakeholders and simultaneously optimizing the tradeoffs among them.

This, in turn, cannot be achieved without leveraging the tools that have been developed in other industries.  These have been well-known and are summarized in the table below.

While the bulk of the work and benefits related to these tools will lie at the organization level, such techniques can be applied directly to healthcare systems, beginning at the environmental level and working down to the patient, as indicated by the check marks in the table.

A few examples of specific challenges that can be addressed through systems analysis and planning solutions include the following:

1 – Optimal allocation of funding

2 – Improving patient flow through rooms and other resources

3 – Capacity management and planning

4 – Staff scheduling

5 – Forecasting, distributing and balancing inventories, both medical/surgical and pharmaceuticals

6 – Evaluation of blood supply networks

Expanding on example #5 (above), supply chain management solutions help forecast demand for services and supplies and plan to meet that demand with people, equipment and inventory.  Longer-term mismatches can be minimized through sales and operations planning, while short-term challenges are addressed with inventory rebalancing and scheduling.

Systems analysis techniques have been developed over many years and are based on a large body of knowledge.  These types of analytical approaches, while very powerful, require appropriate tools and expertise to apply them efficiently and effectively.  Many healthcare delivery organizations have invested in staff who have experience with some of these tools, including lean thinking in process design and six-sigma in supply chain management.  There are also instances where some of the techniques under “Optimizing Results” are being applied, as well as predictive modeling and artificial intelligence.  But, more remains to be done, even in the crucial, but less hyped, areas like inventory management.  Some healthcare providers may initially need to depend on resources external to their own organizations as they build their internal capabilities.

I leave you with a thought for the weekend – “Life is full of tradeoffs.  Choose wisely!”

Multi-echelon Inventory Optimization and Lean/Six Sigma

The Emerging Role of Optimization in Business Decisions

For many, there was a point in the past when the idea of “optimization” used to summon images of Greek letters juxtaposed in odd arrangements kept in black boxes that spewed out inscrutable results.  Optimization was sometimes considered a subject best left to impractical theorists, sequestered in small cubicles deep in the bowels of the building to which few paths led and from which there were no paths out.  From that perspective, optimization was something that had to be reserved for special cases of complex decisions that had little relevance for day-to-day operations.

That perception was never reality, and today, growing numbers of business managers understand the role of optimization.  Those leaders who leverage it intelligently are not just valuable assets, but absolutely essential to achieving and sustaining a more valuable enterprise.  Global competition mandates that executives never “settle” in their decisions, but that they constantly make higher quality decisions in less time.  Optimization helps decision-makers do just that.  The exponential increases in computing power, along with advances in software, have enabled the use of optimization in an ever-widening array of business decisions.

 

How Lean Thinking Helps

Lean principles are applied to drive out waste.  One of the most predominant lean tools used for identifying waste is Value Stream Mapping which helps identify eight wastes, including overproduction, waiting, over-processing, unnecessary inventory, handling and transportation, defects, and underutilized talent.  In inventory management, this often happens through a reduction of lead times and lot sizes.

The reduction of lead times and lot sizes through lean in manufacturing has focused on reducing setup time to eliminate waiting and work-in-process inventory, as well as the frequent use of physical and visible signals for replenishment of consumption.  One of the challenges is that consumption or “true demand” at the end of the value network is never uniform for each time period, despite efforts to level demand upstream.

Acting and deciding are closely related and need to be carefully coordinated so that the end result does not favor faster execution over optimizing complex, interdependent tradeoffs.

 

The Importance of Six Sigma

Six sigma pursues reduced variability in processes.  In manufacturing, this relates most directly to controlling a production process so that defective lots or batches do not result.  It has been encapsulated in the acronym DMAIC:  define, measure, analyze, improve, control.

There has been a natural interest in the convergence of lean and six sigma in manufacturing and inventory management so that fixed constraints like lead time and lot size can be continuously attacked while, at the same time, identifying the root causes of variability and reducing or eliminating them.

There are obvious limitations to both efforts, of course.  The physics and economics of reducing lot size and lead time place limits on lean efforts, and six sigma is limited by physics and market realities (the marketplace is never static).

Until it is possible to economically produce a lot size of one with a lead time of zero and infinite capacity, manufacturers will need to optimize crucial tradeoffs. 

 

Crucial Tradeoffs for Manufacturers

In a manufacturing organization, 60% to 70% of all cash flow is often spent on the cost of goods sold – purchasing raw materials, shipping and storing inventory, transforming materials or components into finished goods, and distributing the final product to customers.  So, deciding just how much to spend on which inventory in what location and when to do it is crucial to success in a competitive global economy.  Uncertain future demand and variations in supply chain processes mandate continuous lean efforts to reduce lead times and lot/batch sizes as well as six sigma efforts to reduce and control variability.

As long as we operate in a dynamic environment, manufacturing executives will continue to face decisions regarding where (across facilities and down the bill of material) to make-to-order vs. make-to-stock and how much buffer inventory to position between operations to adequately compensate for uncertainty while minimizing waste.

Taken in complete isolation, the determination of a buffer for a make-to-stock finished good at the point of fulfillment for independent demand measured by service level (not fill rate) is not trivial, but it is tractable.  But, for almost every manufacturer, the combination of processes that link levels in the BOM and geographically dispersed suppliers, facilities and customers, means that many potential buffer points must be considered.  Suddenly, the decision seems almost impossible, but advances in inventory theory and multi-echelon inventory optimization have been developed and proven effective in addressing these tradeoffs, improving working capital position and growing cash flow.
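For the isolated single-buffer case the paragraph above calls tractable, a minimal sketch (with hypothetical costs and demand) is the newsvendor critical ratio, which converts underage and overage costs into a target service level and a base-stock level:

```python
from statistics import NormalDist

def base_stock_level(mu, sigma, underage_cost, overage_cost):
    """Newsvendor logic: stock up to the demand quantile at the critical ratio."""
    service_level = underage_cost / (underage_cost + overage_cost)  # critical ratio
    return NormalDist(mu, sigma).inv_cdf(service_level), service_level

# Hypothetical: demand over the replenishment time ~ N(500, 80);
# a lost sale costs $9 in margin, a leftover unit costs $1 to carry.
level, sl = base_stock_level(500, 80, underage_cost=9.0, overage_cost=1.0)
```

Multi-echelon optimization generalizes exactly this tradeoff across many interdependent buffer points, where the isolated calculation no longer applies.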

 

So What?

In many cases, the key levers for eliminating waste and variability in any process are the decision points.  When decisions are made that consider all the constraints, multiple objectives, and dependencies with other decisions, significant amounts of wasted time and effort are eliminated, thereby reducing the variability inherent in a process where the tradeoffs among conflicting goals and limitations are not optimized.

Intuition or incomplete, inadequate analysis will only result in decisions that are permeated with additional cost, time and risk.  Optimization not only delivers a better starting point, it gives decision-makers insight about the inputs that are most critical to a given decision.  Put another way, a planner or decision-maker needs to know the inputs (e.g. resource constraints, demand, cost, etc.) in which a small change will change the plan and the inputs for which a change will have little impact.

Multi-echelon inventory optimization perfectly complements lean and six sigma programs to eliminate waste by optimizing the push/pull boundary (between make-to-stock and make-to-order) and inventory buffers as lean/six sigma programs drive down structural supply chain barriers (e.g. lead time and lot/batch size) and reduce variability (in lead times, internal processes and demand).

Given constant uncertainty in end-user demand and the economics of manufacturing in an extremely competitive global economy, business leaders cannot afford not to make the most of all the tools at their disposal, including lean, six sigma, and optimization.

Forecasting vs. Demand Planning

Often, the terms “forecasting” and “demand planning” are used interchangeably. 

The fact that one concept is a subset of the other obscures the distinction. 

Forecasting is the process of mathematically predicting a future event.
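As a minimal illustration (the demand series and smoothing parameter here are hypothetical), simple exponential smoothing is one such mathematical prediction:

```python
def exp_smooth_forecast(history, alpha=0.3):
    """Simple exponential smoothing: returns the one-step-ahead level forecast."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level  # blend each new actual in
    return level

demand = [100, 104, 98, 110, 107, 115, 111, 118]
forecast = exp_smooth_forecast(demand, alpha=0.3)
```

The choice of alpha is itself a planning decision: near 1 the forecast chases the latest actual, near 0 it smooths over noise.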

As a component of demand planning, forecasting is necessary, but not sufficient.

Demand planning is that process by which a business anticipates market requirements.  

This certainly involves both quantitative and qualitative forecasting.  But, demand planning requires a holistic process that includes the following steps:

1.      Profiling SKUs with respect to volume and variability in order to determine the appropriate treatment:

For example, high volume, low variability SKUs will be easy to mathematically forecast and may be suited for lean replenishment techniques.  Low volume, low variability items may be best suited for a simple re-order point.  High volume, high variability SKUs will be difficult to forecast and may require a sophisticated approach to safety stock planning.  Low volume, high variability SKUs may require a thoughtful postponement approach, resulting in an assemble-to-order process.  This profiling analysis is complemented nicely by a Quantitative Reasonability Range Check, which should be an on-going part of your forecasting process.

2.       Validation of qualitative forecasts from among functional groups such as sales, marketing, and finance
3.       Estimation of the magnitude of previously unmet demand
4.       Predicting underlying causal factors where necessary and appropriate through predictive analytics
5.       Development of the quantitative forecast including the determination of the following:

  • Level of aggregation
  • Correct lag
  • Frequency
  • Appropriate forecasting model(s)
  • Best settings for forecasting model parameters
  • Forecast consumption rules
  • Demand that you didn’t capture in sales or shipments (in the case of retail stockouts or fill/kill ordering in CPG)

6.      Rationalization of qualitative and quantitative forecasts and development of a consensus expectation
7.      Planning for the commercialization of new products
8.      Calculating the impact of special promotions
9.      Coordination of demand shaping requirements with promotional activity
10.    Determination of the range and the confidence level of the expected demand
11.    Collaborating with customers on future requirements
12.    Monitoring the actual sales and adjusting the demand plan for promotions and new product introductions
13.    Identification of sources of forecast inaccuracy (e.g. sales or customer forecast bias, a change in the data that requires a different forecasting model or different settings on an existing model, or a promotion or new product introduction that greatly exceeded or failed to meet expectations).
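As a concrete sketch of the profiling in step 1, SKUs can be segmented by average volume and by variability, the latter measured here with the coefficient of variation (CV) of historical demand.  The thresholds, treatment labels, and demand numbers below are illustrative assumptions, not fixed rules:

```python
from statistics import mean, pstdev

def profile_sku(history, volume_cutoff=1000.0, cv_cutoff=0.5):
    """Classify a SKU by volume and variability and suggest a treatment.

    history        -- historical demand quantities per period
    volume_cutoff  -- illustrative threshold separating high/low volume
    cv_cutoff      -- illustrative threshold on the coefficient of variation
    """
    volume = mean(history)
    cv = pstdev(history) / volume if volume else float("inf")
    high_vol = volume >= volume_cutoff
    high_var = cv >= cv_cutoff
    treatment = {
        (True, False): "statistical forecast / lean replenishment",
        (False, False): "simple re-order point",
        (True, True): "sophisticated safety stock planning",
        (False, True): "postponement / assemble-to-order",
    }[(high_vol, high_var)]
    return {"volume": volume, "cv": cv, "treatment": treatment}

# A steady, high-volume SKU vs. a lumpy, low-volume SKU
print(profile_sku([1200, 1150, 1250, 1180, 1220]))
print(profile_sku([10, 100, 5, 80]))
```

The same volume and CV statistics can feed the Quantitative Reasonability Range Check mentioned above, flagging forecasts that fall outside the range history supports.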

The proficiency with which an organization can anticipate market requirements has a direct and significant impact on revenue, margin, and working capital, and potentially market share.  However, as an organization invests in demand planning, the gains tend to be significant at the beginning of the effort, but diminishing returns set in much more quickly than in many other process improvement efforts.

This irony should not disguise the fact that significant ongoing effort is required simply to maintain a high level of performance in demand planning, once it is achieved.

It may make sense to periodically revisit the profiling exercise (see #1 above) in order to determine whether the results are reasonable, whether the inputs are being properly collected and integrated, and whether additional value could be captured through improved analysis, additional collaboration, or other means.

I’ll leave you once again with a thought for the weekend – this time from Ralph Waldo Emerson:

“You cannot do a kindness too soon, for you never know how soon it will be too late.”

Thanks for stopping by and have a wonderful weekend!

Do You Need a Network Design CoE?


Whether you formally create a center of excellence or not, an internal competence in value network strategy is essential.  Let’s look at a few of the reasons why.

Weak Network Design Limits Business Success

From an operational perspective, the greatest leverage for revenue, margin, and working capital lies in the structure of the supply chain or value network.*

It’s likely that more than half of the cost and capabilities of your value network remain cemented in its structure, limiting what you can achieve through process improvements or even world-class operating practices.

You can improve the performance of existing value networks through an analysis of their structural costs, constraints, and opportunities to address common maladies like these:

  • Overemphasis on a single factor.  For example, many companies have minimized manufacturing costs by moving production to China, only to find that the “hidden” cost associated with long lead times has hurt their overall business performance.
  • Incidental Growth.  Many value networks have never been “designed” in the first place.  Instead, their current configuration has resulted from neglect and from the impact of mergers and acquisitions.
  • One size fits all.  If a value network was not explicitly designed to support the business strategy, then it probably doesn’t.  For example, stable products may need to flow through a low-cost supply chain while seasonal and more volatile products, or higher value customers, require a more responsive path.

It’s Never One and Done

At the speed of business today, you must not only choose the structure of your value network and the flow of product through that network, you must continuously evaluate and evolve both.  

Your consideration of the following factors and their interaction should be ongoing:

  1. Number, location and size of factories and distribution centers
  2. Qualifications, number and locations of suppliers
  3. Location and size of inventory buffers
  4. The push/pull boundary
  5. Fulfillment paths for different types of orders, customers and channels
  6. Range of potential demand scenarios
  7. Primary and alternate modes of transportation
  8. Risk assessment and resiliency planning
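One way to see why these factors must be weighed together, rather than optimizing any one in isolation, is to score candidate network structures on total cost with lead time monetized as a penalty.  The candidate configurations, costs, and penalty rates below are hypothetical, purely to illustrate how the ranking shifts as responsiveness becomes more valuable:

```python
def score_network(config, lead_time_cost_per_day):
    """Total annual cost of a candidate network configuration:
    explicit operating cost plus a penalty that monetizes lead time."""
    return config["annual_cost"] + lead_time_cost_per_day * config["lead_time_days"]

# Hypothetical candidate structures for the same product flow
candidates = [
    {"name": "single plant in Asia", "annual_cost": 8_000_000, "lead_time_days": 45},
    {"name": "domestic plant", "annual_cost": 11_000_000, "lead_time_days": 7},
    {"name": "Asia plant + domestic DC buffer", "annual_cost": 9_500_000, "lead_time_days": 14},
]

# As the business places more value on responsiveness, the best structure changes
for penalty in (10_000, 100_000):  # $ per day of lead time
    best = min(candidates, key=lambda c: score_network(c, penalty))
    print(f"penalty ${penalty}/day -> {best['name']}")
```

With a low penalty the cheap, long-lead network wins; with a high one, the hybrid structure does, which is exactly the “hidden cost of long lead times” trap described above.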

The best path through your value network structure for each product, channel and/or customer segment combination can be different.  It can also change over the course of the product life-cycle.

In fact, the best value network structure for an individual product may itself be a portfolio of multiple supply chains.  For example, manufacturers sometimes combine a low-cost, long lead-time source in Asia with a higher cost, but more responsive, domestic source.
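The dual-sourcing portfolio can be sketched numerically: commit a base quantity to the low-cost, long-lead source and reserve a fraction of expected demand for the responsive domestic source, then compare expected cost across demand scenarios.  All unit costs, the shortage penalty, and the demand scenarios here are hypothetical assumptions for illustration only:

```python
def dual_source_cost(split_to_domestic, demand_scenarios,
                     asia_unit_cost=10.0, domestic_unit_cost=14.0,
                     shortage_cost=25.0):
    """Expected per-scenario cost of covering demand with a fixed Asia
    commitment (placed early, at base volume) plus responsive domestic
    capacity, with unmet demand charged at a shortage penalty.

    split_to_domestic -- fraction of expected demand reserved for the
                         responsive domestic source
    """
    expected = sum(demand_scenarios) / len(demand_scenarios)
    asia_qty = expected * (1 - split_to_domestic)  # committed long in advance
    total = 0.0
    for d in demand_scenarios:
        domestic_qty = max(0.0, min(d - asia_qty, expected * split_to_domestic))
        short = max(0.0, d - asia_qty - domestic_qty)
        total += (asia_qty * asia_unit_cost
                  + domestic_qty * domestic_unit_cost
                  + short * shortage_cost)
    return total / len(demand_scenarios)

scenarios = [800, 1000, 1200]  # hypothetical demand outcomes
for split in (0.0, 0.2, 0.4):
    print(split, round(dual_source_cost(split, scenarios), 1))
```

In this toy setup, a modest responsive share (20%) yields a lower expected cost than either single-sourcing from Asia or over-reserving domestic capacity, which is the intuition behind the portfolio approach.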

Focus on the Most Crucial Question – “Why?”

The dynamics of the marketplace mandate that your value network cannot be static, and the insights into why a certain network is best will enable you to monitor the business environment and adjust accordingly.

Strategic value network analysis must yield insight on why the proposed solution is optimal.  This will always be more important than the “optimal” recommendation.

In other words, the context is more important than the answer.

The Time Is Always Now

For all of these reasons, value network design is more than an ad hoc, one-time, or even periodic project.  At today’s speed of competitive global business, you must embrace value network design as an essential competency applied to a continuous process.

You may still want to engage experienced and talented consultants to assist you in this process from time to time, but the need for continuous evaluation and evolution of your value network means that delegating the process entirely to other parties will definitely cost you money and market share.  

Competence Requires Capability

Developing your own competence in network design will require that you have access to enabling software.  The best solution will be a platform that facilitates flexible modeling with powerful optimization, easy scenario analysis, intuitive visualization, and collaboration.  

The right solution will also connect to multiple source systems, while helping you cleanse and prepare data. 

Through your analysis, you may find that you need additional “apps” to optimize particular aspects of your value network such as multi-stage inventories, transportation routing, and supply risk.  So, apps like these should be available to you on the software platform to use or tailor as required.  

The best platform will also accelerate the development of your own additional proprietary apps (with or without help), giving you maximum competitive advantage.  

You need all of this in a ubiquitous, scalable and secure environment.  That’s why cloud computing has become such a valuable innovation.  

A Final Thought

I leave you with this final thought from Socrates:  “The shortest and surest way to live with honor in the world is to be in reality what we appear to be.”

 

*I prefer the term “value network” to “supply chain” because it more accurately describes the dynamic collection of suppliers, plants, outside processors, fulfillment centers, and so on, through which goods, currency and data flow along the path of least resistance (seeking the lowest price, shortest time, etc.) as value is exchanged and added to the product en route to the final customer.
