“Moneyball” and Your Business

 

It’s MLB playoff time, and my team (the Tribe) is there, again.  (Pregnant pause to enjoy the moment.)

A while back, the film “Moneyball” showed us how the Oakland A’s built a super-competitive sports franchise on analytics – essentially “competing on analytics” within the business parameters of a major league baseball franchise.  The “Moneyball” saga and other examples of premier organizations competing on analytics were featured in the January 2006 Harvard Business Review article, “Competing on Analytics” (reprint R0601H) by Thomas Davenport, who also co-authored the book of the same name.

The noted German doctor, pathologist, biologist, and politician Rudolf Virchow called the task of science “to stake out the limits of the knowable.”  We might paraphrase Virchow and say that the task of analytics is to enable you to stake out everything you can possibly know from your data.

So, what do these thoughts by Davenport and Virchow have in common?

In your business, you strive to make the highest-quality decisions today about how to run your business tomorrow, despite the uncertainty that tomorrow brings.  That means you have to know everything you possibly can know today.  To that end, many companies have invested, or are considering an investment, in supply chain intelligence or various analytics software packages.  Yet many companies that have made huge investments know only a fraction of what they should know from their ERP and other systems.  Their executives seem anxious to explore “predictive” analytics or “AI” because it sounds good.  But investing in software tools without understanding what you need to do, and how, is akin to attempting surgery with a wide assortment of specialized tools without having gone to medical school.

Are you competing on analytics?

Are you making use of all of the data available to support better decisions in less time?

Can you instantly see what’s inhibiting your revenue, margin and working capital goals across the entire business, in context?

Do you leverage analytics in the “cloud” for computing at scale and information that is always on and always current?

I appreciate everyone who stops by for a quick read.  I hope you found this both helpful and thought-provoking.

As we enter this weekend, I leave you with one more thought that relates to “business intelligence” — this time, attributed to Socrates:

“The wisest man is he who knows his own ignorance.”

Do you know what you don’t know?  Do I?

Have a wonderful weekend!

Does Your Demand Planning Process Include a “Quantitative Sanity Range Evaluation”?

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple: how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not on hunting for a place to make a difference?  I have heard this process called a “Forecast Reality Check” and a “Forecast Reasonability Check”.  Just to be difficult, I’ll call it a Quantitative Sanity Range Evaluation, or QSRE (I have my own reasons).  It may be similar in some ways to analyzing “forecastability” or to a “demand curve analysis”, but it differs in at least one important aspect: the “sanity range” is calculated through bootstrapping (technically, you would be bootstrapping a confidence interval, but please allow me the liberty of a less technical name, “sanity range”).  A QSRE can be applied across industries, but it’s particularly relevant in consumer products, where I saw a version of this implemented first-hand by Allan Gray, a really smart gentleman, back when I worked with him at End-to-End Analytics – just so you know I didn’t think this all up on my own!

At a minimum, a QSRE must consider the following components:

  1. Every level and combination of the product and geographical hierarchies
  2. A quality quantitative forecast
  3. A sanity range out through time
  4. Metrics for measuring how well a point forecast fits within the sanity range
  5. Tabular and graphical displays that are interactive, intuitive, always available, and current.
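
Component #4 above can be made concrete with a small sketch.  Assuming, hypothetically, that lower and upper bounds have already been computed for each future period, one simple metric is the share of periods whose point forecast falls inside the sanity range:

```python
# Hypothetical illustration: score a point forecast against a precomputed
# sanity range.  All numbers and names are invented for this sketch.

def in_range_share(forecast, lower, upper):
    """Fraction of future periods where the point forecast lies within
    the [lower, upper] sanity range."""
    hits = sum(1 for f, lo, hi in zip(forecast, lower, upper) if lo <= f <= hi)
    return hits / len(forecast)

# One product/geography combination, six future periods
forecast = [100, 110, 125, 140, 150, 190]
lower    = [ 90,  95, 100, 100, 105, 105]
upper    = [115, 125, 140, 155, 170, 185]

score = in_range_share(forecast, lower, upper)
if score < 1.0:
    print(f"Review needed: only {score:.0%} of periods pass the sanity check")
```

A planner then looks only at the combinations whose score falls below some agreed threshold, instead of hunting through every forecast.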

If you are going to attempt to establish a QSRE, then I would suggest five best practices:

1.  Eliminate duplication.  When designing a QSRE process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QSRE, so that planners can focus their energy on asking critical questions only about those cases.

2. Minimize human time and effort by maximizing the power of cloud computing.  Leverage the fast, ubiquitous computing power of the cloud to deliver results that are self-explanatory and always available everywhere, providing an immediately understood context that identifies invalid forecasts. 

3. Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

4. Reflect reality.  Calculations of the upper and lower bounds of the sanity range should reflect the fact that uncertainty grows with each extension of the forecast into a future time period.  For example, the upper and lower limits of the sanity range for one period into the future should usually be narrower than the limits for two or three periods into the future.  These, in turn, should be narrower than the limits calculated for more distant future periods.  Reflecting reality also means capturing seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of reflecting reality involves calculating the sanity range for future demand from what actually happened in the past, so that you do not force assumptions of normality onto the sanity range (this is why bootstrapping is essential).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.
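
The bootstrap at the heart of practice #4 needs little more than resampling.  Here is a minimal sketch of a percentile bootstrap (demand history invented; a production version would typically resample forecast errors by horizon rather than raw demand):

```python
import random

def bootstrap_range(history, horizon, n_boot=5000, coverage=0.95, seed=42):
    """Percentile bootstrap of cumulative demand out through `horizon` periods.

    Resampling past observations with replacement means the range reflects
    the actual, possibly non-normal, shape of historical demand, and it
    widens naturally as the horizon extends."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.choice(history) for _ in range(horizon))
        for _ in range(n_boot)
    )
    lo_idx = int(n_boot * (1 - coverage) / 2)
    hi_idx = int(n_boot * (1 + coverage) / 2) - 1
    return totals[lo_idx], totals[hi_idx]

history = [120, 95, 130, 110, 180, 90, 105, 140, 125, 100, 160, 115]
for h in (1, 2, 3):
    lo, hi = bootstrap_range(history, h)
    print(f"{h} period(s) ahead: sanity range [{lo}, {hi}]")
```

Because the range for three periods ahead is built from sums of three resampled observations, it comes out wider than the one-period range, which is exactly the widening behavior practice #4 demands.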

5. Illustrate business performance, not just forecasting performance, with sanity ranges.  The range should be applied not only from time period to time period, but also cumulatively across periods such as months or quarters in the fiscal year.

If you are engaged in demand planning or sales and operations planning, I would welcome your thoughts on performing a QSRE.

Thanks again for stopping by Supply Chain Action.  As we leave the work week and recharge for the next, I leave you with the words of John Ruskin:

“When skill and love work together, expect a masterpiece.”

Have a wonderful weekend!

A Digital Value Network Needs an Accelerated “3-D” Cycle


The strength of any chain is defined by its weakest link.  A supply chain, or as I prefer to say, a value network, is similarly constrained.  By orchestrating the flow of material, information and cash through your value network, you can prevent negative business impact from weak links by detecting anomalies, diagnosing their causes, and directing the next best action before there is a serious business impact.  Do you need some kind of self-aware artificial intelligence to make this work?  Let’s think about that for a minute.

 

 

 


 

 

There is a lot of buzz about the “autonomous” supply chain these days.  The subject came up at a conference I attended where the theme was the supply chain of 2030.  But, before we turn out the lights and lock the door to a fully automated, self-aware, supply chain “Skynet”, let’s take a moment and put this idea into some perspective.

 

 

 

The Driverless Car Analogy

I’ve heard the driverless vehicle used as an analogy for the autonomous supply chain.  However, orchestrating the value network – where goods, information and currency pulse freely, fast, and securely between facilities, organizations, and even consumers, following the path of least resistance (aka the digital supply chain) – may prove to be even more complex than driving a vehicle.  Digital technologies, such as additive manufacturing, blockchain, and more secure IoT infrastructure, advance the freedom, speed and security of these flows.  As these technologies make more automation possible, as well as a kind of “autonomy”, guiding these flows becomes both more difficult and more important.

 

Most sixteen-year-old adolescents can successfully drive a car, but you may not want to entrust your global value network to them.

 

 

Before you can have an autonomous supply chain, you need to accelerate the Detect, Diagnose, Direct Cycle – let’s call it the 3-D Cycle, for short, not just because it’s alliterated, but because each “D” is one of three key dimensions of orchestrating your value network.  In fact, as you accelerate the 3-D Cycle, you will learn just how much automation and autonomy makes sense.

 

Figure 1

Detect, Diagnose, Direct

The work of managing the value network has always been to make the best plan, monitor issues, and respond effectively and efficiently.  However, since reality begins to diverge almost immediately from even the best plans, perhaps the most vital challenges in orchestrating a value network are monitoring and responding.

 

In fact, every plan is really just a response to the latest challenges and their causes.

 

So, if we focus on monitoring and responding, we are covering all the bases of what planners and executives do all day . . . every day.

 

Monitoring involves detecting and diagnosing those issues which require a response.  Responding is really directing the next best action.  That’s why we can think in terms of the “Detect, Diagnose, Direct Cycle”:

 

  1. Detect (and/or anticipate) market requirements and the challenges in meeting them
  2. Diagnose the causes of the challenges, both incidental and systemic
  3. Direct the next best action within the constraints of time and cost
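
As a toy sketch of the “Detect” step (demand numbers and threshold invented for illustration), a trailing-window z-score is one simple rule for flagging demand that breaks from its recent pattern:

```python
from statistics import mean, stdev

def detect_anomalies(series, window=6, threshold=2.0):
    """Flag periods whose value deviates from the trailing window's mean
    by more than `threshold` sample standard deviations."""
    flags = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flags.append(i)
    return flags

weekly_demand = [100, 102, 98, 101, 99, 103, 100, 180, 101, 99]
print(detect_anomalies(weekly_demand))  # → [7]: the spike at index 7 is flagged
```

Diagnosis and direction then start from the flagged periods, rather than from a scan of the whole series.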

 

The 3-D Cycle used to take a month, in cases where it was even possible.  Digitization – increased computing power, more analytical software, the availability of data – has made it possible in a week.  Routine, narrowly defined, short-term changes are now addressed even more quickly under a steady state, and a lot of controlled automation is not only possible in this case but obligatory, in the form of robotic process automation (RPA).  However, no business remains in a steady state, and changes from that state require critical decisions which add or destroy significant value.

 

You will need to excel at managing and accelerating the 3-D Cycle if you want to win in the digital economy.

 

There is no industry where mastering this Cycle is more challenging than in retail, but the principles apply across most industries.

Data Is the Double-edged Sword

The universe of data is exploding exponentially from growing connections among organizations, people and things, creating the need for an ever-accelerating 3-D Cycle.  This is especially relevant for retailers, and it presents both a challenge and an opportunity for competing with your digital value network in the global digital economy.

 

For example, redesigned retail supply chains, enabled with analytics and augmented reality (AR), are not only meeting but raising consumer expectations.

 

Figure 2

Amazon’s re-imagination of retail means that competitors must now think in terms of many-to-many flows of information, product, and cash along the path of least resistance for the consumer (and not just to and from their own locations).  This kind of value network strategy goes beyond determining where to put a warehouse and to which stores it should ship.  Competing in today’s multi-channel world can mean inventing new ways to do business, even in the challenging fashion space – and if it is happening in fashion, it would be naive to think rising consumer expectations can be ignored in other retail segments, or even other industries.  Consider a few retail examples:

Zara leverages advanced analytics, not only to sense trends, but also to optimize pricing and operations in their vertically integrated supply chain.

Stitch Fix is changing the shopping model completely, providing more service with less infrastructure.

Zalando has been so successful in creating a rapid-response supply chain that it is now providing services to other retailers.

Nordstrom, of all organizations, is opening “inventoryless” stores.

Walmart has been on an incredible acquisition and partnership spree, recently buying Flipkart and, as early as two years ago, partnering with JD.com.  And, then, there is the success of Walmart.com.

Target is redesigning the way its DCs work, creating a flow-through operation with smaller replenishment quantities.

 

Yet, many companies are choking on their own ERP data, as they struggle to make decisions on incomplete, incorrect and disparate data.  So, while the need for the 3-D Cycle to keep pace grows more intense, some organizations struggle to do anything but watch.  The winners will be those who can capitalize on the opportunities that the data explosion affords by making better decisions faster through advanced analytics (see Figure 2).

 

The time required just to collect, clean, transform and synchronize data for analysis remains a fundamental barrier to better detection, diagnosis and decisions in the value network.  A consolidated data store that can connect to source systems, and on which data can be consolidated, programmatically “wrangled”, and updated into a supra data set, forms a solid foundation on which to build better detection, diagnosis, and decision logic that can execute in “relevant time”.  This can seem like an almost insurmountable challenge, but it is not only doable with today’s technology, it’s becoming imperative.  And it’s now possible to work off of a virtual supra data set, but that’s a discussion for another day.

Detect, Diagnose and Direct with Speed, Precision & Advanced Analytics

Detection of incidental challenges (e.g. demand is surging or falling based on local demographics, a shipment is about to arrive late, a vendor is behind on production, etc.) in your value network can be significantly automated to take place in almost real time, or at least in relevant time.  Detection of systemic challenges will be a bit more gradual and is based on the metrics that matter to your business, such as customer service, days of supply, etc.  It is the speed, and therefore the scope, now possible that drives better visibility through detection.

 

Diagnosing the causes of incidental problems is only limited by the organization and detail of your transactional data.  Diagnosing systemic challenges requires a hierarchy of metrics with respect to cause and effect (such as the SCOR® model).  Certainly, diagnosis can now happen with new speed, but it is the combination of speed and precision that makes a new level of understanding possible through diagnosis.

 

With a clean, complete, synchronized data set that is always available and always current, as well as a proactive view of what is happening and why, you need to direct the next best action while it still matters.  You need to optimize your trade-offs and perform scenario and sensitivity analysis.

Figure 3, below, shows both incidental/operational and systemic/strategic examples for all three dimensions of the 3-D Cycle.

Figure 3

 

 

Speed in detection, speed and precision in diagnosis, and the culmination of speed, precision and advanced analytics in decision-making give you the power to lift the performance of your value network to levels not previously possible.  Much of the entire 3-D Cycle, and the prerequisite data synchronization, can be, and will be, automated by industry leaders.  Just how “autonomous” those decisions become remains to be seen.

 

Fortunately, you don’t need “Skynet”, but a faster and better 3-D Cycle is fundamental to your journey toward the digital transformation of your value network.

 

The basic ideas of detecting, diagnosing and directing are not novel to supply chain professionals and other business executives.   However, the level of transparency, speed, precision and advanced analytics that are now available mandate a new approach and promise dramatic results.  Some will gradually evolve toward a better, faster 3-D cycle.  The greatest rewards will accrue to enterprises that climb each hill with a vision of the pinnacle, adjusting as they learn.  These organizations will attract more revenue and investment.  Companies that don’t capitalize on the possibilities will be relegated to hoping for acquisition by those that do.

 

Admittedly, I’m pretty bad at communicating graphically, but I’ve attempted to draft a couple of rudimentary visuals of what the architecture to support a state-of-the-art 3-D Cycle could look like (Figure 4, below), as a vehicle for facilitating discussion.  I do realize that the divisions I’m showing between Cloud, IoT, Extended Apps, and ERP are somewhat arbitrary and definitely fluid.

 

Figure 4
So, I imagine that I’m at least partly wrong, and could be completely wrong-headed . . . but, then again, maybe not.  I will say this:  the convergence of business intelligence (BI) technology and traditional advanced planning solutions supports my point, and that convergence is definitely happening.  Cloud BI solutions (e.g. Aera, Birst, Board) incorporate at least some machine learning (ML) algorithms for prediction, while Oracle, Microsoft, IBM, and SAP are all making ML available in their portfolios, adjacent to their BI applications.  Many advanced planning vendors are pitching “control towers”, which are really an attempt to combine BI capabilities and planning.  Logility recently purchased Halo, which embeds ML.  Even Oracle and SAP have built their cloud supply chain planning solutions with embedded BI, really making an effort toward a faster, better 3-D Cycle.

So, the future would appear to be now.  If that’s true, you have to ask yourself whether your current paradigm for value network planning will guide you to competitive advantage or leave you hoping that someone else will ask you to the dance.

I’ll leave you with this thought for the weekend:  I know more now than I once did, especially about how much I still don’t know that I don’t know.

Have a wonderful weekend!

 

The Potential for Proven Analytics and Planning Tools in Healthcare Delivery

I’ve spent time in a hospital.  I was well cared for, but I didn’t like it, and I worried about the cost and how well I would recover (pretty well, so far!).  Also, my daughter is a doctor (she obviously takes after her mom!), so healthcare is an area of high interest for me.

To say that managing a large, disaggregated system such as healthcare delivery – with its multitude of individual parts, including patients, physicians, clinics, hospitals, pharmacies, rehabilitation services, home nurses, and more – is a daunting task would be an understatement.

Like other service or manufacturing systems, different stakeholders have different goals, making the task even more challenging.

Patients want safe, effective care with low insurance premiums. 

Payers, usually not the patient, want low cost. 

Health care providers want improved outcomes, but also efficiency.

The Institute of Medicine has identified six quality aims for twenty-first-century healthcare:  safety, effectiveness, timeliness, patient-centeredness, efficiency, and equity.  Achieving these goals in a complex system will require a holistic understanding of the needs and goals of all stakeholders, and simultaneously optimizing the tradeoffs among them.

This, in turn, cannot be achieved without leveraging the tools that have been developed in other industries.  These tools are well known and are summarized in the table below.

While the bulk of the work and benefits related to these tools will lie at the organization level, such techniques can be applied directly to healthcare systems, beginning at the environmental level and working back down to the patient, as indicated by the check marks in the table.

A few examples of specific challenges that can be addressed through systems analysis and planning solutions include the following:

1 – Optimal allocation of funding

2 – Improving patient flow through rooms and other resources

3 – Capacity management and planning

4 – Staff scheduling

5 – Forecasting, distributing and balancing inventories, both medical/surgical and pharmaceuticals

6 – Evaluation of blood supply networks

Expanding on example #5 above, supply chain management solutions help forecast demand for services and supplies and plan to meet that demand with people, equipment and inventory.  Longer-term mismatches can be minimized through sales and operations planning, while short-term challenges are addressed with inventory rebalancing and scheduling.

Systems analysis techniques have been developed over many years and are based on a large body of knowledge.  These types of analytical approaches, while very powerful, require appropriate tools and expertise to apply them efficiently and effectively.  Many healthcare delivery organizations have invested in staff who have experience with some of these tools, including lean thinking in process design and six-sigma in supply chain management.  There are also instances where some of the techniques under “Optimizing Results” are being applied, as well as predictive modeling and artificial intelligence.  But, more remains to be done, even in the crucial, but less hyped, areas like inventory management.  Some healthcare providers may initially need to depend on resources external to their own organizations as they build their internal capabilities.

I leave you with a thought for the weekend – “Life is full of tradeoffs.  Choose wisely!”

The Time-to-Action Dilemma in Your Supply Chain



If you can’t answer these three sets of questions in less than 10 minutes (and I suspect that you can’t), then your supply chain is not the lever it could be to drive more revenue with better margin and less working capital:

1) What are inventory turns by product category (e.g. finished goods, WIP, raw materials, ABC category, etc.)?  How are they trending?  Why?

2) What is the inventory coverage?  How many days of future demand can you satisfy with the inventory you have on hand right now?

3) Which sales orders are at risk, and why?  How is this trending?  And do you understand the drivers?
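
To make questions 1 and 2 concrete, here is a minimal sketch of the two calculations (figures invented; in practice, cost of goods sold, inventory value and forward demand would come from ERP and planning systems):

```python
def inventory_turns(annual_cogs, avg_inventory_value):
    """Classic turns metric: annual cost of goods sold divided by
    average inventory value."""
    return annual_cogs / avg_inventory_value

def days_of_coverage(on_hand, future_demand):
    """Number of full future periods (e.g. days) that current on-hand
    inventory can satisfy, consuming demand period by period."""
    days = 0
    for demand in future_demand:
        if on_hand < demand:
            break
        on_hand -= demand
        days += 1
    return days

print(inventory_turns(annual_cogs=1_200_000, avg_inventory_value=200_000))    # → 6.0
print(days_of_coverage(on_hand=500, future_demand=[120, 110, 130, 125, 90]))  # → 4
```

The hard part, of course, is not the arithmetic but assembling trustworthy inputs by product category and keeping them current.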

Global competition and the transition to a digital economy are collapsing your slack time between planning and execution at an accelerating rate.

 

You need to answer the questions that your traditional ERP and APS can’t, from an intelligent source where data is always on and always current, so that your supply chain becomes a powerful lever for making your business more valuable.

 

You need to know the “What?” and the “Why?” so you can determine what to do before it’s too late.

 

Supply chain decisions are all about managing interrelated goals and trade-offs, so data may need to come from various ERP systems, OMS, APS, WMS, MES, and more.  Unless you can consolidate and blend that data end-to-end, at every level of granularity and along all dimensions, you will always be reinventing the wheel when it comes to finding and collecting the data for decision support.  It will always take too long.  It will always be too late.

 

You need diagnostic insights so that you can know not just what, but why.  And, once you know what is happening and why, you need to know what to do — your next best action, or, at least, viable options and their risks . . . and you need that information in context and “in the moment”.

 

In short, you need to detect opportunities and challenges in your execution and decision-making, diagnose the causes, and direct the next best action in a way that brings execution and decision-making together.

 

Some, and maybe even much, of detection, diagnosis and directing the next best action can be automated with algorithms and rules.  Where it can be, it should be.  But, you will need to monitor the set of opportunities that can be automated because they may change over time.

 

If you can’t detect, diagnose and direct in a way that covers your end-to-end value network in the time that you need it, then you need to explore how you can get there because this is at the heart of a digital supply chain.

As we approach the weekend, I’ll leave you with this thought to ponder:
“Leadership comes from a commitment to something greater than yourself that motivates maximum contribution from yourself and those around you, whether that is leading, following, or just getting out of the way.”
Have a wonderful weekend!

The Value Network, Optimization & Intelligent Visibility

The supply chain is more properly designated a value network through which many supply chains can be traced.  Material, money and data pulse among links in the value network, following the path of least resistance, accelerated by digital technologies, including additive manufacturing, more secure IoT infrastructure, RPA, and, potentially, blockchain. 

If each node in the value network makes decisions in isolation, the total value in one or more supply chain paths becomes less than it could be.  

In the best of all possible worlds, each node would eliminate activities that do not add value to its own transformation process such that it can reap the highest possible margin, subject to maximizing and maintaining the total value proposition for a value network or at least a supply chain within a value network.  This is the best way to ensure long-term profitability, assuming a minimum level of parity in bargaining position among trading partners and in advantage among competitors.

Delivering insights to managers that allow them to react in relevant-time without compromising the value of the network (or a relevant portion of a network, since value networks interconnect to form an extended value web) remains a challenge.

The good news is that many analytical techniques and the mechanisms for delivering them in timely, distributed ways are becoming ubiquitous.  For example, optimization techniques and scenarios can provide insights into profitable ranges for decisions, marginal benefits of incremental resources, and robustness of plans, given uncertain inputs.
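
As a toy example of one such insight (profit and capacity figures invented), the marginal benefit of an incremental resource can be estimated by re-solving a tiny two-product mix problem with one more unit of shared capacity:

```python
def best_profit(capacity, products):
    """Brute-force a two-product mix: maximize total profit subject to a
    single shared capacity constraint.
    products: list of two (profit_per_unit, capacity_per_unit) tuples."""
    (p1, c1), (p2, c2) = products
    best = 0
    for x1 in range(int(capacity // c1) + 1):
        x2 = int((capacity - x1 * c1) // c2)  # fill remaining hours with product 2
        best = max(best, x1 * p1 + x2 * p2)
    return best

products = [(30, 2), (20, 1)]   # (profit per unit, hours per unit)
base = best_profit(100, products)
print(base, best_profit(101, products) - base)  # → 2000 20
```

Here product 2 earns more profit per hour, so the optimal plan spends every hour on it, and each extra hour of capacity is worth 20 – the kind of marginal-value answer an optimization solver reports as a shadow price.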

When these techniques are combined with intelligent visibility that allows you to detect and diagnose anomalies in your supply chain, then everyone can make coordinated decisions as they execute.

I will leave you with these words of irony from Dale Carnegie, “You make more friends by becoming interested in other people than by trying to interest other people in yourself.”

Thanks again for stopping by and have a wonderful weekend!


What are “Analytics”?

“Analytics” is one of those business buzz words formed by transforming an adjective into a noun. 

So forceful and habitual is such misuse of language that one might call it a compulsion among business analysts and writers.

The term “analytics” commonly refers to software tools that can be used to organize, report, and sometimes visualize data in an attempt to lend meaning to it for decision-makers.  These capabilities have advanced in recent years, so that many types of graphical displays can readily be employed to expose data and try to make information from it.  “Analytics” has been used to refer to such a broad array of software applications – and numerous industry analysts have attempted to segment these applications in various ways – that it is useful to establish some broad categories.

A simple, though imperfect, scheme such as the following may be most useful; the potential value that can be achieved through each category increases from #1 through #4.

1. Reports – repetitively run displays of pre-aggregated and sorted information with limited or no user interactivity.

2. Dashboards – frequently updated displays of performance metrics, which can be displayed graphically.  They are ideally tailored to the needs of a given role.  Dashboards support the measurement of performance, based on pre-aggregated data with some user selection and drill-down capability.  Hierarchies of metrics have been created that attempt to facilitate a correlation between responsibility and performance indicators.  The most common such model is the Supply Chain Operations Reference (SCOR®) model, created and maintained by the Supply Chain Council.

3. Data Analysis Tools – interactive software applications that enable data analysts to dynamically aggregate, sort, plot, and otherwise explore data, based on metadata.  Significant advancements have been made in recent years to dramatically expand the options for visualizing data and to accelerate the speed at which these tools generate results.

4. Decision Support/Management Science Tools – simulation, optimization, and other approaches to multi-criteria decisions which require the application of statistics and mathematical modeling and solving.

Let’s focus on Decision Support/Management Science Tools, the category with the most potential for adding value to strategic (high value) decision-making in a sustained fashion. 

So, then, if that is what analytics are, do they enable higher quality decisions in less time, and if so, to what extent are those better decisions in less time driving cash flow and value for their business?  These are critically important questions because improved, integrated decision-making that is based in facts and adjusted for risk drives the bottom line.

Execution is good, but operational execution under a poor decision set is like going fast in the wrong direction: bad, but perhaps not immediately fatal.  A pattern of poor decisions, though, will put a business under very quickly.

Enabling higher quality decisions in less time depends on the decision-maker, but it can also depend on the tools employed and the skills of the analysts using the tools. 

The main activities in using these tools involve the following:

  1. Sifting through the oceans of data that exist in today’s corporate information systems
  2. Synthesizing the relevant data into information (a thoughtful data model within an analytical application is helpful, but not sufficient)
  3. Presenting it in such a way so that a responsible manager can combine it with experience and quickly know how to make a better decision

Obtaining a valuable result requires careful preparation and skilled interaction, asking the right questions initially and throughout the above activities.

Some of the questions that need to be asked before the data can be synthesized into information in a useful way are represented by those given below:

  1. What is the business goal?
  2. What decisions are required to reach the goal?
  3. What are the upper and lower bounds of each decision? (Which outcomes are unlivable?)
  4. How sensitive is one decision to the outcome of other, interdependent decisions?
  5. What risks are associated with a given decision outcome?
  6. Will a given decision today impact the options for the same decision tomorrow?
  7. What assumptions are implicitly driven by insufficient data?
  8. How reliable is the data upon which the decision is based?
    • Is it accurate?
    • How much of the data has been driven by one-time events that are not repeatable?
    • What data is missing?
    • Is the data at the right level of detail?
    • How might the real environment in which the decision is to be implemented be different from that implied by the data and model (i.e. an abstraction of reality)?
    • How can the differences between reality and its abstraction be reconciled so that the results of the model are useful?

Ask the right questions.

Know the relative importance of each.

Understand which techniques to apply in order to prioritize, analyze and synthesize the data into useful information that enables faster, better decisions.

We often think of change when a new calendar year rolls around.  Since this is my first post of the new year, I’ll leave you with one of my favorite quotes on change, from Leo Tolstoy:  “Everybody thinks of changing humanity, and nobody thinks of changing himself.”

Have a wonderful weekend!
