Digital Transformation of Supply Chain Planning

A couple of years back, IBM released a study “Digital operations transform the physical” (capitalization theirs).

Citing client examples, the report states:

“Perpetual planning enables more accurate demand and supply knowledge, as well as more accurate production and assembly status that can lower processing and inventory costs . . .

Analytics + real-time signals = perpetual planning to optimize supply chain flows

They are describing the space to which manufacturers, retailers, distributors, and even service providers (like, say, health care delivery organizations) need to move rapidly with value network planning.  This is a challenging opportunity for software providers, and the race is on to enable this in a scalable way.  The leading software providers must rapidly achieve the following:

1)      Critical mass by industry

2)      Custody of all the data and flows necessary for informing decision-makers with dynamic, timely updates of relevant information in an immediately comprehensible context

3)      Fast, relevant, predictive and prescriptive insights that leverage up-to-the-minute information

Some solution provider (or perhaps a few, segmented by industry) is going to own the “extended ERP” (ERP+ or EERP, to coin a phrase) data.  Whoever does will be able to provide constantly flowing, intelligent metrics and decision support (what IBM has called “perpetual planning”) that all companies of size desperately need.  This means having the ability to improve the management of working capital, optimize value network flows, minimize value network risk, plan for strategic capacity and contingency, and, perhaps most importantly, make decisions that are “in the moment” and span the entire value network.  That is the real prize here, and a growing number of solution providers are starting to turn their vision toward that goal.  Many are converging on this space from different directions – some from inside the enterprise and some from the extra-enterprise space.

The remaining limiting factor for software vendors and their customers aspiring to this end-to-end, up-to-the-moment insight and analysis is the completeness and cleanliness of data.  In many cases, too much of this data is wrong, incomplete, spread across disparate systems, or all of the above.  That is both a threat and an opportunity.  It is a threat because speedily providing metrics, even in the most meaningful visual context, is worse than useless if the data used to calculate those metrics are wrong.  It is an opportunity because organizations can now focus on completing, correcting, and harmonizing the data that is most essential to the metrics and analysis that matter most.

What are you doing to achieve this capability for competitive advantage? 

I work for AVATA, and in the interest of full disclosure, AVATA is an Oracle partner.  With that caveat, I do believe that Oracle Supply Chain Planning Cloud delivers leading capabilities for perpetually optimizing your value network, while Oracle Platform-as-a-Service (soon to integrate DataScience.com) provides unsurpassed power to wrangle data and innovate at the “edge”.  Both are worth a look.

Thanks for stopping by.  I’ll leave you with this thought of my own:

“Ethical corporate behavior comes from hiring ethical people.  Short of that, no amount of rules or focus on the avoidance of penalties will succeed.”

Have a wonderful weekend!


Analytics vs. Humalytics

I have a background in operations research and analysis, so, as you might expect, I am biased toward optimization and other types of analytical models for supply chain planning and operational decision-making.  Of course, you know the obvious, ongoing challenges that users of these models face:

  1. The data inputs for such a model are never free of defects
  2. The data model that serves as the basis for a decision model is always deficient as a representation of reality
  3. As soon as a model is run, the constantly evolving reality increasingly deviates from the basis of the model

Still, models and tools that help decision-makers integrate many complex, interrelated trade-offs can enable significantly better decisions.

But what if we could outperform very large, complex, periodic decision models through a sort of “existential optimization” or, as a former colleague of mine put it, “humalytics”?

Here is the question expressed more fully:

If decision-makers within procurement, manufacturing, distribution, and sales had “right time” information about tradeoffs and about how their individual contributions were affecting their own performance and that of the enterprise, could they collectively outperform a comprehensive optimization/decision model that is run periodically (e.g. monthly or quarterly), in the same way that market-based economies easily outperform centrally planned economies?

I would call this approach “humalytics” (a term borrowed from a former colleague, Russell Halper, but please don’t blame him for the content of this post!): leveraging a network of the most powerful analytical engines – human brains – empowered with quantified analytical inputs that are updated in “real time,” or as close to that as required.  In this way, managers can combine these analytics with their experience and knowledge of the business, including factors that might not be captured in a decision model, to constantly make the best replenishment and fulfillment decisions, steadily increasing the value of the organization.

In other words, decision-makers would have instant, always-on access to both performance metrics and the tradeoffs that affect them.  For example, a customer service manager might see a useful visualization of the actual total cost of fulfillment (cost of inventory plus cost of disservice) and its key drivers, such as actual fill rates and inventory turns, as they are happening, summarized in the most meaningful way, so that the responsible human can make the most informed “humalytical” decisions.
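To make the idea concrete, here is a minimal sketch (in Python) of the kind of “right time” metric feed such a manager might see.  The field names, the carrying-cost rate, and the per-unit cost of disservice are assumptions for illustration, not a prescription.

```python
# Minimal, hypothetical sketch of a "right time" fulfillment metric feed.
# The carrying-cost rate and per-unit disservice cost are assumed values.

def total_cost_of_fulfillment(on_hand_value, carrying_rate_annual,
                              units_short, disservice_cost_per_unit,
                              days_in_period=7):
    """Approximate total cost of fulfillment for one review period."""
    inventory_cost = on_hand_value * carrying_rate_annual * days_in_period / 365.0
    disservice_cost = units_short * disservice_cost_per_unit
    return inventory_cost + disservice_cost

def fill_rate(units_filled_from_stock, units_demanded):
    """Fraction of demanded units filled from stock."""
    return units_filled_from_stock / units_demanded if units_demanded else 1.0

# Example: $1.2M on hand at a 20% annual carrying rate; 40 units short at $150 each
print(round(total_cost_of_fulfillment(1_200_000, 0.20, 40, 150.0), 2))
print(fill_rate(960, 1000))
```

The point is not the arithmetic, which is trivial, but that the inputs are refreshed continuously rather than once per planning cycle.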

Up until now, the answer has been negative for at least two reasons:

A. Established corporate norms and culture in which middle management (and maybe sometimes even senior management) strive diligently for the status quo.

B. Lack of timely and complete information and analytics that would enable decision-makers to act as responsible, accountable agents within an organization, the same way that entrepreneurs act within a market economy.

With your indulgence, I’m going to deal with these in reverse order.

A few software companies have been hacking away at obstacle B, and we may be approaching a tipping point where accurate, transparent information and relevant, timely analytics can be delivered in near real time, even on mobile devices, allowing human decision-makers to constantly adjust their actions to deliver continuously improved performance.  This is what I am calling “humalytics”.

But a network of human decision-makers equipped only with descriptive metrics is not enough.  Critical insights into tradeoffs and metrics come through analytical models – leveraging capabilities like machine learning, optimization, and RPA, perhaps in the form of “mini-app” models – that operate on a curated superset of data that is always on and always current.  So, at least two things are necessary:

1. Faster optimization and other analytical modeling techniques from which the essential information is delivered in “right time” to each decision-maker

2. An empowered network of (human) decision-makers who understand the quantitative analytics that are delivered to them and who have a solid understanding of the business and their part in it

In current robotics research there is a vast body of work on algorithms and control methods for groups of decentralized, cooperating robots, called a swarm or collective (ftp://ftp.deas.harvard.edu/techreports/tr-06-11.pdf).  Maybe we don’t need a swarm of robots after all.  Maybe we just need empowered decision-makers who engage not only in Sales and Operations Planning (or, if you will, Integrated Business Planning), but in integrated business thinking and acting on an hourly (or right time) basis.

What think you?

If you think this might make sense for your business, or if you are working on implementing this approach, I’d be very interested to learn your perspective and how you are moving forward.

I leave you with these words from Leo Tolstoy, “There is no greatness where there is no simplicity, goodness, and truth.”

Have a wonderful weekend!

Metrics, Symptoms and Cash Flow

Metrics can tell us if we are moving in the right or wrong direction and that, in itself, is useful.  However, metrics by themselves do not help us assess our competitive position or aid us in prioritizing our efforts to improve.

To understand our competitive position, metrics need to be benchmarked against comparable peers. Benchmarking studies are available, some of them free.  They tell us where we stand relative to others in the industry, provided the study in question has sufficient other data points from your industry (or sub-industry segment).

Many times, getting relevant benchmarks proves challenging.  But once we have the benchmarks, then what?

Does it matter if we do not perform as well as the benchmark of a particular metric?  If that metric affects revenue growth, margins, return on assets, or available capital, it may matter significantly.

But, we are left to determine how to improve the metrics and with which metrics to start.  

Consider an alternative path.  Begin with the undesirable business symptoms that keep you up at night and give you that bad feeling in the pit of your stomach.

Relate business processes to symptoms and map potential root causes within each business process to undesirable business symptoms.

Multiple root causes in multiple business processes can relate to a single symptom.  On the other hand, a single root cause may be causing multiple undesirable symptoms.  Consequently, we must quantify and prioritize the root causes.

“Finding the Value in Your Value Network” outlines a straightforward, systematic approach to prioritizing and accelerating process improvements.  I hope you will take a look at that article and let me know your thoughts.

Thanks for having a read.  Remember that “You cannot do a kindness too soon, for you never know how soon it will be too late.”

Have a wonderful weekend!

Ten Key Questions for Spare Parts Planning

1)      How does demand behave?  To answer this, you must ask yourself the following:

a) How often do you expect to receive a demand for a given spare part?

b) What is the expected magnitude of a demand transaction when it occurs?

c) Are failures based on age, or use, or both?

d) How large is the installed base of a given spare part?

(Technical note:  Historical data on the time interval between demands (the inter-demand interval) and on the size of demand transactions (demand order sizes) can be used to estimate the likelihood of a demand occurring in a given time interval, and its transaction size, using statistical techniques such as Croston’s method or a compound Poisson distribution[i].  If failure of a part (and the subsequent need for replacement or repair) is time-dependent (as many are), then the combined use of a type of Erlang distribution to estimate the interval and a Poisson distribution to estimate the quantity may be more appropriate.[ii])
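For illustration only, here is a minimal Python sketch of Croston’s method as referenced in the technical note above.  The demand history and smoothing constant are made up, and a production implementation would add bias correction, error tracking, and data validation.

```python
# Minimal sketch of Croston's method for intermittent demand.
# `history` is a list of per-period demand quantities, many of them zero;
# `alpha` is the exponential smoothing constant. Illustrative only.

def croston(history, alpha=0.1):
    size = None       # smoothed nonzero demand size
    interval = None   # smoothed inter-demand interval (in periods)
    periods_since_demand = 0
    for d in history:
        periods_since_demand += 1
        if d > 0:
            if size is None:          # initialize on the first observed demand
                size, interval = float(d), float(periods_since_demand)
            else:
                size = size + alpha * (d - size)
                interval = interval + alpha * (periods_since_demand - interval)
            periods_since_demand = 0
    if size is None:
        return 0.0
    return size / interval            # expected demand per period

print(croston([0, 0, 5, 0, 0, 0, 7, 0, 4, 0, 0, 6]))
```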

2)      Are some spare parts much more important than others?  Some of the key questions here include the following:

a)  How expensive is the item?

b)  What does it cost to store and transport the item?

c)  What are the consequences when the part fails?

d) Do the consequences of a failure compound with time?

Factors like these are used to determine a part’s “criticality”.  For more critical parts, you usually need to carry more safety stock.  That buffer stock may need to be geographically distributed near potential sources of demand, and/or expedited delivery may be necessary.

(Technical note:  Where the answer to “d.” above is “yes”, then the use of an Erlang distribution may be helpful to estimate the duration of the wait time for the customer.)
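As a rough, hypothetical illustration of how the answers to a) through d) above might be rolled up into a criticality class, consider the sketch below.  The weights, the 1-to-5 scoring scale, and the threshold are assumptions for illustration, not a standard method.

```python
# Hypothetical criticality score combining the factors above.
# Each input is scored 1 (low) to 5 (high) by the planner; weights are assumed.

def criticality_class(unit_cost, holding_and_freight, failure_impact,
                      impact_compounds_with_time):
    score = (0.2 * unit_cost
             + 0.1 * holding_and_freight
             + 0.5 * failure_impact
             + 0.2 * (5 if impact_compounds_with_time else 1))
    return "critical" if score >= 3.5 else "non-critical"

print(criticality_class(unit_cost=4, holding_and_freight=2,
                        failure_impact=5, impact_compounds_with_time=True))
```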

3)      Is demand affected by the conditions in which the part is used and/or the level of preventative maintenance that it receives?  In cases where the answer is “yes”, then the algorithms and statistical approaches that are used to calculate demand and inventory requirements may need to be tailored for different situations.

4)      Are the magnitude and sources of demand such that requirements can be modeled as a trend over time with appropriate adjustments for seasonality?  If so, then this simplifies the planning considerably in that the requirements for such a spare part may be able to be modeled in a way that is similar to non-spare parts.

5)      Is the supply network composed of a single stage or multiple echelons?  The calculations for safety stock are different for each structure.
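For the single-stage case in question 5, a common normal-approximation formula combines demand variability and lead-time variability.  Here is a minimal sketch; the service level and parameter values are illustrative, and multi-echelon networks require different, network-aware calculations not shown here.

```python
# Single-stage safety stock under the common normal approximation,
# combining demand and lead-time variability. Parameter values are illustrative.
from math import sqrt
from statistics import NormalDist

def safety_stock(mean_demand, sd_demand, mean_lt, sd_lt, service_level=0.95):
    z = NormalDist().inv_cdf(service_level)
    sigma_dlt = sqrt(mean_lt * sd_demand**2 + (mean_demand**2) * sd_lt**2)
    return z * sigma_dlt

# Example: 20 units/week demand (sd 8), 3-week lead time (sd 0.5), 95% service
print(round(safety_stock(20, 8, 3, 0.5), 1))
```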

6)      Are the failed parts scrapped (consumed) or repaired and used again?  Where failed parts are repaired and used again, a determination must be made as to whether each failed part is beyond repair and should be scrapped.  You must also determine when new replacement parts should be purchased, and tracking by serial number is required.

7)      Are the purchase, or manufacturing, batch sizes significantly larger than the expected demand quantities in a period?  If so, then this should be taken into account when planning resupply and safety stock.

8)      Is the supply constrained by a budget?  If so, you should take this into consideration when planning supply as well.

9)      Will requirements be reviewed periodically or continuously?  Continuous (or nearly continuous) review systems are quite feasible with modern communications and computing technology, and in many, if not most cases, they can yield better results.  A continuous review system reevaluates supply requirements each time an actual demand is generated.  Where a large number of spare parts must be monitored by a limited number of planners, however, it may be more practical to periodically review requirements (once per inter-demand interval or some multiple/fraction of inter-demand interval) for non-critical items and plan safety stock to account for demand variability over lead-time and the review period as well as variability in lead-time for those non-critical items.  Critical items, particularly expensive ones, should probably be evaluated continuously.  It may be useful to segment spare parts by levels of criticality and treat each group of parts accordingly.
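A simple sketch of the distinction in question 9: under continuous review, safety stock covers variability over the lead time only; under periodic review, it must also cover the review period.  The parameters below are illustrative and assume normally distributed demand.

```python
# Contrast of continuous-review and periodic-review targets under a normal
# approximation. Parameter values are illustrative only.
from math import sqrt
from statistics import NormalDist

def reorder_point_continuous(mean_d, sd_d, lead_time, service=0.95):
    z = NormalDist().inv_cdf(service)
    return mean_d * lead_time + z * sd_d * sqrt(lead_time)

def order_up_to_periodic(mean_d, sd_d, lead_time, review_period, service=0.95):
    cover = lead_time + review_period     # exposure spans lead time + review period
    z = NormalDist().inv_cdf(service)
    return mean_d * cover + z * sd_d * sqrt(cover)

print(round(reorder_point_continuous(5, 3, 4), 1))      # review at every demand
print(round(order_up_to_periodic(5, 3, 4, 2), 1))        # review every 2 periods
```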

10)   How many time periods of inventory do I currently have on hand for each spare part and for each category of spare parts?

A fundamentally sound approach to spare parts management can be summarized as follows:

  1. Understand your demand patterns
  2. Classify your parts (e.g. by criticality, by demand pattern, and/or cost, etc.)
  3. Apply the appropriate forecasting model and statistics
  4. Employ an efficient algorithm to find inventory targets and purchase quantities that meet the specific needs, constraints, and goals of your business, including the structure of your value network, the requirements of your customers, and the costs and risks/uncertainties that you face.  Keep the approach as simple as possible within those conditions (a toy sketch of such a heuristic follows after this list).

(Note:  Many formal statistical approaches require assumptions that may not hold in your business.  In most cases, a heuristic that searches for a high-value solution conforming to real-world constraints, leveraging statistical theory where appropriate, is the most useful approach.)

  5. Deploy this algorithm through an easy-to-use, fast, visual, and interactive tool that functionally meets your specific requirements but doesn’t “break the bank”.
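Here is the toy heuristic promised above: a greedy, marginal-value search that raises the base-stock level of whichever part buys the most service per dollar until a hypothetical budget is exhausted.  Poisson lead-time demand is assumed, and the parts, costs, and budget are invented for illustration.

```python
# Toy "marginal value" heuristic for setting base-stock levels under a budget.
# Poisson lead-time demand is assumed; all data below is invented.
from math import exp

def poisson_cdf(k, mean):
    term, total = exp(-mean), exp(-mean)
    for i in range(1, k + 1):
        term *= mean / i
        total += term
    return total

parts = {  # part: (mean lead-time demand, unit cost)
    "bearing": (4.0, 80.0),
    "gasket": (1.5, 5.0),
    "controller": (0.3, 900.0),
}
stock = {p: 0 for p in parts}
budget = 1500.0

while True:
    best, best_gain = None, 0.0
    for p, (mu, cost) in parts.items():
        if cost > budget:
            continue
        # marginal gain in cycle service level per dollar spent on one more unit
        gain = (poisson_cdf(stock[p] + 1, mu) - poisson_cdf(stock[p], mu)) / cost
        if gain > best_gain:
            best, best_gain = p, gain
    if best is None:
        break
    stock[best] += 1
    budget -= parts[best][1]

print(stock)  # base-stock levels chosen by the greedy search
```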

As we enter this weekend, I leave you with one more thought — this time, from Socrates:  “The wisest man is he who knows his own ignorance.”

Have a wonderful weekend!


[i] Lengu, D., Syntetos, A., Babai, M., “Spare Parts Management:  A Distribution-based Approach”, Salford Business School Working Paper Series, 342/11.

[ii] Saidane, S., Babai, M., Aguir, S., Korbaa, O., “Spare Parts Inventory Systems under an Increasing Failure Rate Demand Interval Distribution”, Proceedings of the 41st International Conference on Computers and Industrial Engineering,  2011.

Accelerating and Prioritizing Process Improvement Efforts

Process/Symptom/Value Matrix

If the supply chain were (as the term implies) really a linear, sequential relationship of entities exchanging goods, information, and currency in a binary, stepwise flow, it might not be quite so difficult.

However, you know that the supply chain really is a complex network of inter-dependent people, organizations and fixed assets, and that goods, data and currency pulse from node to node in almost any direction following the path of least resistance.

This “value network” contains the money you seek.  But, since the movements of material, data, and cash are continuous, dynamic, and interdependent, the benchmarking results telling you that you have some aggregate potential often do not change, despite your best efforts.

You don’t have a crystal ball, but you still need to make higher quality (i.e. more profitable/valuable) decisions in less time.

How do you prioritize your efforts to attack undesirable business symptoms with better decision processes so that revenue growth, return on net assets, and profitability are increased?

Let’s start with what we know.

We know the undesirable business symptoms.  These are the measurements that make our sleep fitful, cause our hair to turn gray, churn our stomachs, and make some business meetings uncomfortable.

Undesirable business symptoms directly and negatively impact the financial measures that determine the value of the enterprise (e.g. Economic Value Added, or EVA®).  We want to ameliorate these symptoms.  A symptom that does not significantly inhibit revenue growth, return on net assets, or margins can be addressed as a secondary priority.

The problem is that the undesirable business symptoms are aggregate measures.  They require decomposition in terms of the root cause.  A Process/Symptom/Value Matrix (PSV Matrix) relates business decision processes to symptoms, ultimately allowing us to link potential root causes within each decision process to undesirable business symptoms.
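As a purely hypothetical illustration of a PSV Matrix, the snippet below relates a few decision processes to a few symptoms, weights each symptom by an assumed financial impact, and ranks the processes.  All names and numbers are invented.

```python
# Hypothetical Process/Symptom/Value (PSV) Matrix sketch.
# Cell values are planner-assigned link strengths (0 = none, 3 = strong);
# each symptom is weighted by an assumed financial impact in $M.

psv = {
    "demand planning": {"excess inventory": 3, "missed deliveries": 2, "expediting cost": 1},
    "supply planning": {"excess inventory": 2, "missed deliveries": 3, "expediting cost": 3},
    "order promising": {"excess inventory": 0, "missed deliveries": 3, "expediting cost": 2},
}
symptom_value = {"excess inventory": 0.4, "missed deliveries": 1.0, "expediting cost": 0.3}

def process_priority(links):
    # Weight each link strength by the financial impact of the symptom it affects.
    return sum(strength * symptom_value[symptom] for symptom, strength in links.items())

for process, links in sorted(psv.items(), key=lambda kv: process_priority(kv[1]), reverse=True):
    print(f"{process}: {process_priority(links):.1f}")
```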

For more on this idea, I’d be honored if you had a look at my short paper, “Finding Value in Your Value Network“.

Something else to ponder as you head into this weekend is the motto of the State of New Hampshire, “Live free or die,” penned by General John Stark on July 31, 1809.

Take good care and have a wonderful weekend!

Answering Questions that Your ERP and APS Can’t

I have worked for some large software companies.  I loved many aspects of those experiences.  But do you want to know the toughest part of those jobs?  It was meeting someone from one of their customers and getting a reaction like, “Oh, you are the enemy!”  Yes, that is literally what one woman said to me, verbatim.

Now, of course, she did not stop to consider all the things that were much easier for her company to do and to keep straight with an integrated, enterprise suite of software applications spanning accounting through manufacturing to procurement.

What flashed to her mind were the things that she and her colleagues could not do with the software.  That’s the way it is with software.  The first things we notice are what we can’t do, not what we can now do that was impossible before.

What we cannot do with our enterprise software systems, however, is a real problem.  To make matters worse, your knowledge workers can easily out-think a software application vendor’s development cycle.  There are some fairly legitimate reasons for this, of course, but the fact remains that ERP and APS vendors have no shot at supporting the need for ongoing innovation on the part of you and your colleagues, who must constantly make faster, better decisions.

Of course, that explains the popularity of Microsoft desktop applications like Excel and Access.

In the meantime, business managers who are not paid to be statisticians, data scientists, algorithm engineers, or programming experts struggle to build and constantly recreate the tools they need to do their work.

They are paid to ask important questions and find alternative answers, but the limitations of their enterprise resource planning (ERP) and advanced planning (APS) systems keep them wrestling just to find and format data in order to answer the really challenging analytical and/or strategic questions.

While it is possible to hire (internally or externally) the talent that combines deep business domain knowledge with data analysis, decision-modeling, and programming expertise to build customized spreadsheets in Microsoft Excel™, faster, more comprehensive, and ubiquitous cloud solutions are emerging.  What’s needed in this approach is the following:

  1. A hyper-fast, super-secure cloud of transaction-level data where like data sources are blended, dissimilar data sources are correlated, and most of the hundreds of basic calculations are performed.  This needs to be a single repository for all data of any type from any source.
  2. A diagnostic layer where the calculations are related to each other in a cause and effect relationship
  3. A continuous stream of decision-support models (e.g. econometric forecasts, optimization models, simulation, etc.)

If you ever need to make better decisions than your competition (Duh!), then this kind of framework may speed your time to value and result in a more secure, scalable, and collaborative solution than desktop tools or point software solutions can provide.   

Such a platform would allow you to see what is happening in business context and why it is happening, and to receive a recommendation for your next best action.

It would also provide a way to build decision “apps” for your business.  You know what apps on your phone have done for you.  Imagine what apps for your enterprise could do . . . and all the data is already there, or could be there, regardless of data type or source.

I will leave you with these words from William Pollard: “Learning and innovation go hand-in-hand.  The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.”  (http://www.thinkexist.com)

Have a wonderful weekend!

Quick Survey of Production Scheduling Heuristics – Part 2

In this week’s post, I will complete my quick survey of approaches for manufacturing scheduling.  Your selection among these (or other) methods should be based on the three fundamental principles I outlined here.

Repetitive Make-to-Stock Manufacturing

Those familiar with “Lean” principles will recognize the leading techniques in this manufacturing environment.  There are two key principles:

  • Constant Flow – keep production levels constant with a mixed-model schedule and level demand (The trick here is how to determine the mixed-model schedule and what to do about demand variation, but that’s a topic for another time.)
  • Demand Pull – only manufacture or purchase additional materials or parts when they are required by the next point downstream in the value network, ultimately by the end customer/consumer.  This is often facilitated with visual (or sometimes electronic) “Kanban” signals.

These principles are facilitated through two kinds of continuous activities, both of which are mandatory for any manufacturing operation, regardless of scheduling approach:

  • Continuous elimination of waste (wait time, WIP, poor quality)
  • Constant effort to reduce setup times and batch (or lot) sizes
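For the Demand Pull principle above, a common rule-of-thumb sizing for Kanban signals is: cards = demand during the replenishment lead time, plus a safety allowance, divided by the container quantity.  A minimal sketch, with illustrative numbers:

```python
# Rule-of-thumb Kanban card count: demand over replenishment lead time,
# plus a safety allowance, divided by the container quantity. Values are illustrative.
from math import ceil

def kanban_cards(demand_per_day, lead_time_days, container_qty, safety_factor=0.1):
    return ceil(demand_per_day * lead_time_days * (1 + safety_factor) / container_qty)

print(kanban_cards(demand_per_day=120, lead_time_days=2, container_qty=25))
```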

For Semiconductor or Other Network Routing Challenges

In semiconductor manufacturing, a chip may take a variety of paths through production, depending on the outcome of any given operation.  Chips are tested after each operation, and the next step in the routing depends on the results of that test.  This variability creates a very difficult scheduling challenge.  Perhaps the most important aspect is tracking the results of each outcome and identifying the next operation for each chip.  Capacity usually has to be planned based on the probability that a piece will follow a given routing, estimated from history.  There are others who know more about scheduling in this environment, but I hope I have at least outlined the basic challenge.
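A stripped-down sketch of that capacity-planning idea: the expected load at each operation equals the volume released multiplied by the probability (estimated from history) that a lot visits that operation.  The routings and probabilities below are invented for illustration.

```python
# Expected load per operation under probabilistic routings.
# Routings, probabilities, and volumes are invented for illustration.

routings = {             # routing: (probability a lot follows it, operations visited)
    "pass-pass":   (0.70, ["etch", "test1", "metal", "test2"]),
    "rework-1":    (0.20, ["etch", "test1", "rework", "test1", "metal", "test2"]),
    "scrap-early": (0.10, ["etch", "test1"]),
}
weekly_starts = 1000     # lots started per week

expected_visits = {}
for prob, ops in routings.values():
    for op in ops:
        expected_visits[op] = expected_visits.get(op, 0.0) + prob * weekly_starts

print(expected_visits)   # expected lot-visits per operation per week
```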

A Word About Optimization

Many scheduling software programs that claim to “optimize” the schedule (see the second paragraph here) merely find the lowest sum of hypothetical or relative costs, such as the cost of a stockout, the cost of carrying inventory, the cost of setups, etc.  These are rarely, if ever, a pure optimum, even assuming that the relative costs are exact and static, which, of course, they aren’t.  So, in effect, you still have a heuristic.  However, if the costs are accurate enough, this can be a very helpful technique, particularly when determining the best sequence to minimize total setup time.  Constraint programming (like that in IBM’s CPLEX optimization engine) has been combined very effectively with graphic schedule visualization tools, like a Gantt chart, and other visual and analytical aids in a holistic solution called IBM ILOG Optimization Decision Manager Enterprise.
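To illustrate the setup-sequencing point on a toy scale, the sketch below exhaustively evaluates every sequence of three products against an invented, sequence-dependent setup-time matrix.  Real problems of any size call for constraint programming or other solvers rather than brute force.

```python
# Toy sequence-dependent setup minimization by brute force.
# The setup matrix is invented; real problems use CP/MIP solvers.
from itertools import permutations

setup = {  # setup[a][b] = changeover time from product a to product b (minutes)
    "A": {"B": 30, "C": 10},
    "B": {"A": 25, "C": 40},
    "C": {"A": 15, "B": 20},
}

def total_setup(seq):
    return sum(setup[a][b] for a, b in zip(seq, seq[1:]))

best = min(permutations(setup), key=total_setup)
print(best, total_setup(best))
```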

Optimization is also very helpful when determining what period of time a “product wheel” (predetermined sequence of products that is repeated – used in many batch processing environments) should cover and how much of the wheel each product should consume (or whether a given product should be omitted from the sequence for a given iteration of the “product wheel”).

This discussion raises two final questions:

1)      How do you select the best method?

A very useful technique for evaluating heuristic scheduling approaches is discrete event simulation.  It is also very helpful in evaluating the results of an optimization.  There are relatively inexpensive software packages that are designed explicitly for this purpose.  Historical data or statistically valid distributions of input variables (e.g. demand, processing times, etc.) can be directly input into a simulation model.
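As a bare-bones stand-in for that idea (a Monte Carlo comparison rather than a full event-driven simulator), the sketch below samples job sets with random processing times and due dates and compares average tardiness under first-in-first-out versus shortest-processing-time sequencing.  All distributions and parameters are assumptions; packaged simulation tools offer far richer modeling.

```python
# Monte Carlo comparison of two dispatching rules (FIFO vs. SPT) on a single
# machine. Distributions and parameters are assumed for illustration.
import random

def mean_tardiness(jobs):
    """jobs: list of (processing_time, due_date) in the order they will run."""
    clock, total = 0.0, 0.0
    for proc, due in jobs:
        clock += proc
        total += max(0.0, clock - due)
    return total / len(jobs)

random.seed(42)
fifo_scores, spt_scores = [], []
for _ in range(500):                        # 500 sampled scenarios
    jobs = [(random.expovariate(1 / 3.0),   # processing time, mean 3 hours
             random.uniform(5, 30))         # due date, hours from now
            for _ in range(10)]
    fifo_scores.append(mean_tardiness(jobs))
    spt_scores.append(mean_tardiness(sorted(jobs, key=lambda j: j[0])))

print("FIFO avg tardiness:", round(sum(fifo_scores) / len(fifo_scores), 2))
print("SPT  avg tardiness:", round(sum(spt_scores) / len(spt_scores), 2))
```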

2)      How should you enable or support it – what tool or tools are required?

If, for a reasonable investment and in keeping with the three fundamental principles I laid out at the end of my last post, you can find a scheduling software application that meets your needs and is integrated with your shop floor transaction system, then that is probably a wise course.  However, that may be difficult to do, particularly in a job shop environment.  Other possible approaches include building scheduling logic in Visual Basic and Excel or tailoring a constraint programming solution specifically for your scheduling challenges with some of the optimization tools that are available today (like IBM ILOG Optimization Decision Manager Enterprise).

In any case, a visual schedule board on the shop floor is often very helpful.  The trick is to keep your information system and scheduling logic in sync with the board.  One possible option is to display the scheduling board on a large screen on the shop floor and automatically update it with transactions that are entered via bar-code scans on the shop floor.

This basic survey of scheduling approaches has not been exhaustive, and it hasn’t dealt explicitly with the use of queuing theory, but I hope it has been a reasonable overview and brief tutorial.  I promise to write about something other than production scheduling next time!

I leave you with the words of Ralph Waldo Emerson, who said, “Though we travel the world over to find the beautiful, we must carry it within us or we find it not.”

Have a wonderful weekend!
