How Can I Improve My Forecast Accuracy?

Imagine

Imagine that Amanda (a completely imaginary person) is a demand planner at Kool Komfort Foods (a completely imaginary company, also branded as K2), a nationwide producer of healthy comfort foods.  She got her bachelor’s in Mechanical Engineering about five years ago.  After a short stint in manufacturing process engineering in another industry, she got interested in the business side of things and moved into supply chain planning, starting in demand planning.  After taking a couple of on-line courses and getting an APICS certification, she seized on the opportunity to be a junior demand planner at K2.  Through her affinity for math and her attention to detail, Amanda earned a couple of promotions and is now a senior demand planner.  She currently manages a couple of product lines, but has her sights set on becoming a demand planning manager and mentoring others.

Amanda has been using some of the common metrics for forecast accuracy, including MAPE (mean absolute percentage error) and weighted MAPE, but the statistical forecast doesn’t seem to improve and the qualitative inputs from marketing and sales are hit or miss.  Her colleague, Jamison, uses the standard deviation of forecast error to plan for safety stock, but there are still a lot of inventory shortages and overages.  

Amanda has heard her VP, Dmitry, present to other department heads how good the forecast is, but when he does that, he uses aggregate measures and struggles when he is asked to explain why order fill rate is not improving, if the forecast is so good.

Amanda wonders what is preventing her from getting better results at the product/DC level, where it counts.  She would love to have it at the product/customer or product/store level, but she knows that she will need better results at the product/DC level before she can do that.  She is running out of explanations for her boss and the supply planning team.  She has been using some basic forecasting techniques that she has programmed into Excel, like single and double exponential smoothing as well as moving average, and even linear regression.  She is sure the math is correct, but the results have been disappointing. 

Amanda’s company just bought a commercial forecasting package.  She was hoping that would help.  It is supposed to run a bunch of models, optimize their parameters, and select the best one, but so far, the simpler models perform the best and are no better – and sometimes worse – than her Excel spreadsheet.

Amanda has been seeing a lot of posts on LinkedIn about “AI”.  She has been musing to herself about whether there is some magic bullet in that space that might deliver better results.  But, she hasn’t had time to learn much about the details of that kind of modeling.  In fact, she finds it all a bit overwhelming, with all of the hype around the topic.

And, anyway, forecasts will always be wrong, they will always change, and the demand planner will always take the blame.  Investments in forecasting will inevitably reach diminishing returns, but for every improvement in forecast accuracy, there are cascading benefits through the supply chain and improvements in customer service.  So, what can Amanda and her company do to make sure they are making the most of the opportunity to anticipate market requirements without overinvesting and losing focus on the crucial importance of developing an ever more responsive value network to meet constantly changing customer requirements?

Unfortunately, there really is no “silver bullet” for forecasting, no matter how many hyperbolic adjectives are used by a software firm in their pitch.  That is not to say that a software package can’t be useful, but you need to really understand what you need and why before you go shopping.  

Demand planning consists of both quantitative and qualitative analysis.  Since the quantitative input can be formulated and automated (not that it’s easy or quick), it can be used for calculating and updating a probabilistic range for anticipated demand over time. 

A good quantitative forecast requires hard work and skilled analysis.  Creating the best possible quantitative forecast (without reaching diminishing returns) will provide a better foundation for, and even improve, qualitative input from marketing, sales, and others.

Profiling

One of the first things you need to do is understand the behavior of the data.  This requires profiling the demand by product and location (either shipping plant/DC or customer location – let’s call that a SKU for ease of reference) with respect to volume and variability in order to determine the appropriate modeling approach.  For example, a basic approach is as follows:

  • High volume, low variability SKUs will be easy to mathematically forecast and may be suited for lean replenishment techniques.
  • Low volume, low variability items may be best suited for simple reorder point replenishment.
  • High volume, high variability SKUs will be difficult to forecast and may require a sophisticated approach to safety stock planning.
  • Low volume, high variability SKUs may require a thoughtful postponement approach, resulting in an assemble- or make-to-order process.
  • A more sophisticated approach would involve the use of machine learning for classification that might find clusters of demand along more dimensions.

Profiling analysis can be complemented nicely by a Quantitative Reasonability Range Check (see below), which should be an on-going part of your forecasting process.
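As a concrete sketch of this profiling, the snippet below classifies a SKU into one of the four volume/variability quadrants using average demand and the coefficient of variation.  The cutoffs are illustrative assumptions; a real analysis would derive them from your own portfolio (for example, from an ABC split).

```python
from statistics import mean, pstdev

def profile_sku(history, volume_cutoff, cv_cutoff=0.5):
    """Classify a SKU into a volume/variability quadrant.

    history: per-period demand quantities for one SKU.
    volume_cutoff / cv_cutoff: illustrative thresholds (assumptions).
    """
    avg = mean(history)
    # Coefficient of variation: relative variability of demand.
    cv = pstdev(history) / avg if avg else float("inf")
    volume = "high" if avg >= volume_cutoff else "low"
    variability = "high" if cv >= cv_cutoff else "low"
    return f"{volume} volume / {variability} variability"

# A steady high-volume SKU vs. a lumpy low-volume one.
print(profile_sku([100, 105, 98, 102, 101, 99], volume_cutoff=50))
# → high volume / low variability
print(profile_sku([0, 12, 0, 0, 30, 2], volume_cutoff=50))
# → low volume / high variability
```

In practice you would run this across every SKU and review the thresholds with the planning team before tying replenishment policy to the result.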

Once you have profiled the data, you can start to develop the quantitative forecast, but you will need to consider the questions:

  1. What is the appropriate level of aggregation for forecasting?
  2. What forecast lag should I use?
  3. How frequently should I forecast?
  4. What are the appropriate quantitative forecast models?
  5. How should I initialize the settings for model parameters?
  6. How should I consume the forecast?
  7. How will I compensate for demand that I couldn’t capture?
  8. What metrics should I use to measure forecast accuracy?

Let’s consider each of these questions, in turn.

A. Level of Aggregation

The point of this analysis is to determine which of the following approaches will provide you with the best results:

  • Forecasting at the lowest level and then aggregating up
  • Forecasting at a high level and just disaggregating down
  • Forecasting at a mid-level and aggregating up and, also, disaggregating down

B. Correct Lag

If you forecast today for the demand you expect tomorrow, you should be pretty accurate because you will have the most information possible, prior to actually receiving orders.  The problem with this is obvious.  You can’t react to this forecast (which will change each day up until you start taking orders for the period you are forecasting) by redistributing or manufacturing product because that takes some time.

Since you cannot procure raw materials, manufacture, pack, or distribute instantly, the “lead time” for these activities needs to be taken into account.  So, you need to have a forecast lag.  For example, if you need a month to respond to a change in demand, then, you would need to forecast this month for next month.  You can continue to forecast next month’s demand as you move through this month, but it’s unlikely you will be able to react, so when you measure forecast accuracy, you need to measure it at the appropriate lag.
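To make the lag concrete, here is a minimal sketch that scores forecasts only at a chosen lag.  The snapshot structure and numbers are hypothetical; the point is that each forecast is matched to the actual that is `lag` periods ahead of the period in which the forecast was made.

```python
# Hypothetical forecast snapshots: forecasts[t] holds the forecast made
# in period t for each future period.
forecasts = {
    1: {2: 120, 3: 110},   # made in period 1
    2: {3: 115, 4: 130},   # made in period 2
}
actuals = {2: 100, 3: 125, 4: 128}

def lagged_ape(forecasts, actuals, lag):
    """Absolute percentage error per period, measured at a fixed lag."""
    errors = {}
    for made_in, snapshot in forecasts.items():
        target = made_in + lag
        if target in snapshot and target in actuals:
            errors[target] = abs(snapshot[target] - actuals[target]) / actuals[target]
    return errors

# Lag 1: the forecast made "this month for next month".
print(lagged_ape(forecasts, actuals, lag=1))
# → {2: 0.2, 3: 0.08}
```

If your response lead time were two months, you would score the same snapshots at `lag=2` instead, even though the lag-1 numbers look better.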

C. Frequency

Should you generate a new forecast every day? Every week?  Or, just once a month?  This largely depends on when you can get meaningful updates to your forecast inputs such as sales orders, shipment history, or updates to industry and any syndicated or customer data (whether leading or trailing indicators) that are used in your quantitative forecast.

D. Appropriate Forecasting Model(s)

So, what mathematical model should you use?  This is a key question, but as you can see, certainly not the only one.

The mathematical approach can depend on many factors, including, but not limited to, the following:

  • Profiling (discussed above)
  • Available and meaningful trailing and leading indicators
  • Amount of history needed for the model vs. history that’s still relevant
  • Assuming a theoretical distribution of demand vs. using the actual (empirical) distribution 
  • Explainability vs. accuracy of the model
  • The appearance of accuracy vs. useful accuracy (overfitting a model to the past)
  • Treatment of qualitative data (e.g., geography, holiday weekends, home football game, etc.)

A skilled data scientist can be a huge help.  A plethora of techniques is available, but a powerful machine learning (or other) technique can be like a sharp power tool.  You need to know what you’re doing and how to avoid hurting yourself.

E. Initializing the Steady State Settings for Parameters

Failure to properly initialize the parameters of a statistical model can cause it to underachieve.  In the case of Holt-Winters 3 parameter smoothing, for example, the modeler needs to have control over how much history is used for initializing the parameters.  If too little history is used, then forecasts will likely be very unreliable. 
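As an illustration of initialization, the sketch below seeds the level, trend, and multiplicative seasonal indices for a Holt-Winters model from the first two full seasons of history.  This is one common heuristic, not the only one; a commercial package may use a different scheme, and the data here are hypothetical.

```python
def init_holt_winters(history, season_len):
    """Seed level, trend, and seasonal indices from early history.

    Uses the first two full seasons; too little history here is exactly
    what makes the resulting forecasts unreliable.
    """
    if len(history) < 2 * season_len:
        raise ValueError("need at least two full seasons to initialize")
    season1 = history[:season_len]
    season2 = history[season_len:2 * season_len]
    m1 = sum(season1) / season_len
    m2 = sum(season2) / season_len
    level = m1
    trend = (m2 - m1) / season_len              # average per-period change
    seasonals = [x / m1 for x in season1]       # multiplicative indices
    return level, trend, seasonals

# Quarterly pattern with a mild upward drift (hypothetical data).
level, trend, seasonals = init_holt_winters(
    [10, 20, 30, 40, 14, 24, 34, 44], season_len=4)
print(level, trend, seasonals)
# → 25.0 1.0 [0.4, 0.8, 1.2, 1.6]
```

The smoothing itself then updates these three components each period; giving the modeler control over how much history feeds this step is the point made above.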

When it comes to machine learning, there are two kinds of parameters – hyperparameters and model parameters.  Training can optimize the model parameters, but knowledge, experience, and care are required to select techniques that are likely to help and to set the hyperparameters for running models that will give you good results.

F. Forecast Consumption Rules

There are a few things to consider when you consume the forecast with orders.  For example, you might want to bring forward previously unfulfilled forecasts (or underconsumption) from the previous period(s), or there may be a business reason to simply treat consumption in each week or month in isolation.

You may want to calculate the forecast consumption more frequently than you generate a new forecast.
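A minimal sketch of the two consumption policies described above, assuming simple per-period forecast and order quantities (the numbers are hypothetical):

```python
def consume_forecast(forecast, orders, carry_forward=True):
    """Remaining (unconsumed) forecast per period after netting orders.

    With carry_forward, forecast left over from earlier periods rolls
    into later ones; otherwise each period is netted in isolation.
    """
    remaining, open_qty = [], 0
    for f, o in zip(forecast, orders):
        available = f + open_qty if carry_forward else f
        left = max(available - o, 0)
        remaining.append(left)
        open_qty = left if carry_forward else 0
    return remaining

print(consume_forecast([100, 100, 100], [80, 110, 90]))
# → [20, 10, 20]   (underconsumption rolls forward)
print(consume_forecast([100, 100, 100], [80, 110, 90], carry_forward=False))
# → [20, 0, 10]    (each period netted in isolation)
```

Which policy is right is a business decision; the sketch just shows that the two can produce different open-forecast quantities from the same orders.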

G. Compensating for Demand You Couldn’t Capture

This is a particular challenge in the retail and CPG industries.  In CPG, many orders from retail customers are placed and fulfilled on a “fill or kill” basis.  The CPG firm fulfills what it can with the inventory on hand and then cancels or “kills” the rest of the order.

In retail, a consumer may simply go to a competitor or order online if the slot for the product on the shelf in a given store is empty.

In either case, sales or shipment history will under-represent true demand for that period.  If you don’t accurately compensate for this, your history will likely drive your forecast model to under-forecast.
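One deliberately simple way to compensate, sketched below, is to raise shipment history from stockout periods to the average of in-stock periods.  Real censored-demand estimation can be considerably more sophisticated; the availability flags and numbers here are hypothetical.

```python
def uncensor_history(shipments, in_stock):
    """Estimate true demand where stockouts censored the history.

    shipments: recorded shipments per period.
    in_stock: whether the product was available for the whole period.
    Stockout periods are raised to the mean of in-stock periods.
    """
    observed = [s for s, ok in zip(shipments, in_stock) if ok]
    baseline = sum(observed) / len(observed)
    return [s if ok else max(s, baseline)
            for s, ok in zip(shipments, in_stock)]

history = [100, 95, 40, 105]   # period 3 was cut short by a stockout
print(uncensor_history(history, [True, True, False, True]))
# → [100, 95, 100.0, 105]
```

Feeding the adjusted series, rather than the raw shipments, into the forecast model is what prevents the systematic under-forecasting described above.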

H. Metrics and Measurement

There are many measures of forecast accuracy that can be used.  A couple of key questions to answer include the following:

  1. Who is the audience and what is their interest?  Consider the sales organization, which is interested in an aggregate measure of sales against its sales target, perhaps by sales group or geography.  On the other hand, customer service doesn’t really happen in aggregate.  If you want to have better customer service, you need to look at forecast accuracy at the SKU level.
  2. Are you measuring forecast error based on an assumed normal distribution that you have defined by projecting a mean and standard deviation?  Or, have you been able to use the actual distribution of forecast error, perhaps created through bootstrapping? 

Remember that you will need to measure forecast error at the correct lag.
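The sketch below shows why the audience matters: two hypothetical SKUs whose errors cancel in aggregate look perfect on an executive dashboard, while SKU-level MAPE and weighted MAPE reveal a 50% miss on each item – which is exactly what the order fill rate will feel.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error across items (or periods)."""
    terms = [abs(a - f) / a for a, f in zip(actuals, forecasts) if a]
    return sum(terms) / len(terms)

def wmape(actuals, forecasts):
    """Volume-weighted MAPE: total absolute error over total actuals."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / sum(actuals)

# Two SKUs whose errors cancel in aggregate (hypothetical numbers).
actuals = [100, 100]
forecasts = [150, 50]
aggregate_error = abs(sum(actuals) - sum(forecasts)) / sum(actuals)
print(aggregate_error)            # → 0.0  (the aggregate looks perfect)
print(mape(actuals, forecasts))   # → 0.5  (each SKU is 50% off)
print(wmape(actuals, forecasts))  # → 0.5
```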

Another thing you may need to keep in mind is that not everyone has been trained to understand forecast error and its interrelationship to inventory, safety stock, and fill rate.  You may have a bit of education to do from time to time, even for executives.

Price & Forecast

In most cases, demand is elastic with respect to price.  In other words, there is a relationship between what you charge for something and the demand for it.  This is why consumer packaged goods companies run promotions and fund promotions with retailers, and also why retailers run their own promotions.  The goal is to grow sales without losing money and/or to gain market share (possibly incurring a short-term loss).  The overall goal is to increase gross margin in a given time period.  Many CPG companies make competing products – think of shampoo or beverages, or even automobiles or car batteries.  And, of course, retailers sell products from their CPG suppliers that compete for shelf space and share of wallet.  Many retailers even sell their own private label goods.  The trick is how to price competing products such that you gain sales and margin over the set of products.  

Just as in forecasting demand, there are both quantitative and qualitative approaches to optimizing pricing decisions which, then, in turn, need to be incorporated into the demand forecast.  The quantitative approach has two components:

  1. Using ML techniques to predict price elasticity, considering history, future special events (home football game, holiday weekend, football team in playoffs, etc.), minimum and maximum demand, and perhaps other features.
  2. Optimizing the promotional offers so that margin is maximized.  For this, a mathematical optimization model may be best so that the total investment in promotional discount and allocations of that investment are respected, limits on cannibalization are enforced, and upper limits on demand are considered.
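As a much-simplified stand-in for the ML approach in step 1, the sketch below fits a constant-elasticity (log-log) demand model with ordinary least squares.  The data are synthetic, generated with a known elasticity of -2 so the estimate can be checked; a real model would add the event features and demand limits mentioned above.

```python
from math import log

def estimate_elasticity(prices, quantities):
    """Least-squares slope of log(quantity) on log(price).

    Under a log-log demand model, that slope is the (constant)
    price elasticity.
    """
    xs = [log(p) for p in prices]
    ys = [log(q) for q in quantities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic demand generated with elasticity -2: q = 1000 * p**-2
prices = [1.0, 2.0, 4.0]
quantities = [1000 * p ** -2 for p in prices]
print(round(estimate_elasticity(prices, quantities), 6))
# → -2.0
```

The estimated elasticities would then feed the optimization in step 2 as coefficients, not as final answers.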

The Quantitative Reasonability Range Check

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not hunting for a place to make a difference?  A Quantitative Forecast Reasonability Range Check (or maybe QRC, for short) accomplishes this perfectly.  If the historical data is not very dense, then a “reasonability range” may need to be calculated through “bootstrapping”, a process of randomly sampling the history to create a more robust distribution.   Once you have this distribution, you can assign a probability to a future forecast and leverage that probability for safety stock planning as well.
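A minimal sketch of the bootstrapped reasonability range: resample the history with replacement, take percentiles of the resampled means, and flag any forecast that falls outside the range.  The history, percentile choices, and resample count are illustrative assumptions.

```python
import random

def bootstrap_range(history, n_resamples=2000, lower_pct=5, upper_pct=95, seed=42):
    """Bootstrap a reasonability range for one future period's demand.

    Resamples the (possibly sparse) history with replacement and takes
    percentiles of the resampled means; no normality is assumed.
    """
    rng = random.Random(seed)
    means = sorted(
        sum(rng.choice(history) for _ in history) / len(history)
        for _ in range(n_resamples)
    )
    lo = means[n_resamples * lower_pct // 100]
    hi = means[n_resamples * upper_pct // 100]
    return lo, hi

history = [90, 120, 80, 150, 100, 95, 130, 110]   # hypothetical demand
lo, hi = bootstrap_range(history)
forecast = 175
flagged = not (lo <= forecast <= hi)   # outside the range → needs attention
print(round(lo, 1), round(hi, 1), "flag:", flagged)
```

Only the flagged forecasts would be routed to a planner, which is exactly how the QRC conserves planner brainpower.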

At a minimum, a QRC must consider the following components:

  • Every level and combination of the product and geographical hierarchies
  • A quantitative forecast
  • An asymmetric prediction interval over time
  • Metrics for measuring how well a point forecast fits within the prediction interval
  • Tabular and graphical displays that are interactive, intuitive, always available, and current

If you are going to attempt to establish a QRC, then I would suggest five best practices:

Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

Minimize human time and effort by automating the math.  Leverage automation and, potentially, even cloud computing power, to deliver results that are self-explanatory and always available, providing an immediately understood context that identifies invalid forecasts. 

Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

Reflect reality.  Calculations of upper and lower bounds of the prediction interval should reflect seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of respecting reality involves calculating the reasonability range for future demand from what actually happened in the past so that you do not force assumptions of normality onto the prediction interval (this is why bootstrapping can be very helpful).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.

Illustrate business performance, not just forecasting performance with prediction intervals.  The range should be applied, not only from time-period to time-period, but also cumulatively across periods such as months or quarters in the fiscal year.

Summary

Demand planning is both quantitative and qualitative.  In this paper, we have touched on the high points of the best practices for building a good quantitative forecasting foundation for your demand planning process.  In our imaginary case, Amanda still has some work to do, some of which lies outside of her expertise.  She will need to articulate the case for making an investment to improve the quantitative forecast and building a better foundation for qualitative input and a consensus demand planning process.  A relatively small improvement in forecast accuracy can have significant positive bottom and top-line impact.  

Amanda needs to convince her management to invest in a consulting service that will deliver the math, without the hype, and within the context of experience, so that she can answer the key quantitative questions every demand planner faces:

  • What is the profile of my demand data?
  • What is the appropriate level of aggregation for forecasting?
  • What forecast lag should I use?
  • How frequently should I forecast?
  • What are the appropriate quantitative forecast models?
  • How should I initialize the settings for model parameters?
  • How should I consume the forecast?
  • How will I compensate for demand that I couldn’t capture?
  • What metrics should I use to measure forecast accuracy?

Does Your Demand Planning Process Include a “Quantitative Reasonability Range Check”?

There is a process that should be part of both your demand planning and your sales and operations planning.  The concept is simple – how do you find the critical few forecasts that require attention, so that planner brainpower is expended on making a difference and not hunting for a place to make a difference?  I’ll call it a Quantitative Forecast Reasonability Range Check (or maybe QRC, for short).  It may be similar in some ways to analyzing “forecastability” or a “demand curve analysis”, but different in at least one important aspect – the “reasonability range” is calculated through bootstrapping (technically, you would be bootstrapping a confidence interval, but please allow me the liberty of a less technical name – “reasonability range”).  A QRC can be applied across industries, but it’s particularly relevant in consumer products.

At a minimum, a QRC must consider the following components:

  1. Every level and combination of the product and geographical hierarchies
  2. A quality quantitative forecast
  3. A prediction interval over time
  4. Metrics for measuring how well a point forecast fits within the prediction interval
  5. Tabular and graphical displays that are interactive, intuitive, always available, and current.

If you are going to attempt to establish a QRC, then I would suggest five best practices:

1.  Eliminate duplication.  When designing a QRC process (and supporting tools), it is instructive to consider the principles of Occam’s razor as a guide:

– The principle of plurality – Plurality should not be used without necessity

– The principle of parsimony – It is pointless to do with more what can be done with less

These two principles of Occam’s razor are useful because the goal is simply to flag unreasonable forecasts that do not pass a QRC, so that planners can focus their energy on asking critical questions only about those cases.

2. Minimize human time and effort by maximizing the power of cloud computing.  Leverage the fast, ubiquitous computing power of the cloud to deliver results that are self-explanatory and always available everywhere, providing an immediately understood context that identifies invalid forecasts. 

3. Eliminate inconsistent judgments.  By following #1 and #2 above, you avoid inconsistent judgments that vary from planner to planner, from product family to product family, or from region to region.

4. Reflect reality.  Calculations of upper and lower bounds of the sanity range should reflect the fact that uncertainty grows with each extension of a forecast into a future time period.  For example, the upper and lower limits of the sanity range for one period into the future should usually be narrower than the limits for two or three periods into the future.  These, in turn, should be narrower than the limits calculated for more distant future periods.  Respecting reality also means capturing seasonality and cyclical demand in addition to month-to-month variations.  A crucial aspect of respecting reality involves calculating the sanity range for future demand from what actually happened in the past so that you do not force assumptions of normality onto the sanity range (this is why bootstrapping is essential).  Among other things, this will allow you to predict the likelihood of over- and under-shipment.

5. Illustrate business performance, not just forecasting performance with sanity ranges.  The range should be applied, not only from time period to time period, but also cumulatively across periods such as months or quarters in the fiscal year.

If you are engaged in demand planning or sales and operations planning, I would welcome your thoughts on performing a QRC.

Thanks again for stopping by Supply Chain Action.  As we leave the work week and recharge for the next, I leave you with the words of John Ruskin:

“When skill and love work together, expect a masterpiece.”

Have a wonderful weekend!

Forecasting vs. Demand Planning

Often, the terms “forecasting” and “demand planning” are used interchangeably. 

The fact that one concept is a subset of the other obscures the distinction. 

Forecasting is the process of mathematically predicting a future event.

As a component of demand planning, forecasting is necessary, but not sufficient.

Demand planning is that process by which a business anticipates market requirements.  

This certainly involves both quantitative and qualitative forecasting.  But, demand planning requires a holistic process that includes the following steps:

1. Profiling SKUs with respect to volume and variability in order to determine the appropriate treatment:

For example, high volume, low variability SKUs will be easy to mathematically forecast and may be suited for lean replenishment techniques.  Low volume, low variability items may be best suited for simple reorder point replenishment.  High volume, high variability SKUs will be difficult to forecast and may require a sophisticated approach to safety stock planning.  Low volume, high variability SKUs may require a thoughtful postponement approach, resulting in an assemble-to-order process.  This profiling analysis is complemented nicely by a Quantitative Reasonability Range Check, which should be an on-going part of your forecasting process.

2. Validating qualitative forecasts from functional groups such as sales, marketing, and finance
3. Estimation of the magnitude of previously unmet demand
4. Predicting underlying causal factors where necessary and appropriate through predictive analytics
5. Development of the quantitative forecast including the determination of the following:

  • Level of aggregation
  • Correct lag
  • Frequency
  • Appropriate forecasting model(s)
  • Best settings for forecasting model parameters
  • Forecast consumption rules
  • Demand that you didn’t capture in sales or shipments (in the case of retail stockouts or fill/kill ordering in CPG)

6. Rationalization of qualitative and quantitative forecasts and development of a consensus expectation
7. Planning for the commercialization of new products
8. Calculating the impact of special promotions
9. Coordinating demand shaping requirements with promotional activity
10. Determination of the range and the confidence level of the expected demand
11. Collaborating with customers on future requirements
12. Monitoring the actual sales and adjusting the demand plan for promotions and new product introductions
13. Identification of sources of forecast inaccuracies (e.g., sales or customer forecast bias, a change in the data that requires a different forecasting model or a different setting on an existing forecast model, a promotion or new product introduction that greatly exceeded or failed to meet expectations)

The proficiency with which an organization can anticipate market requirements has a direct and significant impact on revenue, margin and working capital, and potentially market share.  However, as an organization invests in demand planning, the gains tend to be significant in the beginning of the effort but diminishing returns are reached much more quickly than in many other process improvement efforts.

This irony should not disguise the fact that significant ongoing effort is required simply to maintain a high level of performance in demand planning, once it is achieved.

It may make sense to periodically revisit the profiling exercise (see #1 above) in order to determine if the results are reasonable, whether or not the inputs are properly being collected and integrated, and the potential for additional added value through improved analysis, additional collaboration, or other means.

I’ll leave you once again with a thought for the weekend – this time from Ralph Waldo Emerson:

“You cannot do a kindness too soon, for you never know how soon it will be too late.”

Thanks for stopping by and have a wonderful weekend!

Thoughts from IBF Conference

I just left the IBF’s Leadership Business Planning & Forecasting Forum and the Supply Chain Planning & Forecasting:  Best Practices Conference in Orlando, Florida.  I’ll share a few of the thoughts that struck me as helpful here in the hopes that they will help you.

From a panel discussion on organizational design at the Forum, I compiled this key point (adding in my own twist):   S&OP is all about integrated decision-making, understanding inter-related tradeoffs, driving toward bottom-line metrics with cause/effect accountability.

Rick Davis from Kellogg pointed out that “Integrated planning is less about function than about process.”

Rick also emphasized managing the inputs, particularly since data and technology are moving at the “speed of mind”.  Decision-makers need to ask themselves, “Will competitors leverage information better than I will?”

A few keys to success in S&OP include the following (see Ten Sins of S&OP for what NOT to do):

1) Scenario analysis

2) Leadership buy-in

3) Quality feeder processes (my point of view)

4) Remembering that financial targets and demand plans are different

Rafal Porzucek defined supply chain agility this way:  “The speed to react with predictable costs and service delivery.”  I thought that was pretty good.

The consumer products executives felt that the effort to leverage social media for forecasting was in the data collection phase.  In a couple of years, it may be useful for generating more accurate forecasts.

Mark Kremblewski and Rafal Porzucek from P&G made a compelling case for enabling innovation through standardization – and it made great sense.

Mark also shared a profound understanding of how the key numbers of business objective, forecast and actual shipments relate to each other.

I hope some of these points stimulate your thinking as they did mine.

There were other speakers who shared some great insights.  The absence of mention here is not meant to diminish their contribution.

This week, in the theme of anticipating the future, I leave you with the words of the English novelist and playwright, John Galsworthy, who won the 1932 Nobel Prize in Literature: “If you do not think about the future, you cannot have one.”

Have a wonderful weekend!


My Thoughts from the IBF Leadership Conference in Las Vegas

Although it is not yet Friday, I want to take this opportunity to share my thoughts on what I heard at the Institute of Business Forecasting and Planning’s Leadership Business Planning and Forecasting Forum.  I was privileged to spend a couple of days with a rather distinguished group of practitioners, software vendors, academics, and consultants exploring three major areas of interest to most supply chain managers and planners – best practices in Leveraging Integrated Demand Signals, Sales and Operations Planning, and Demand Planning.  I managed to leave my notes in my hotel room, but here are a few of my thoughts in no particular order that you may find immediately useful (some completely original, some borrowed, some modified from something I heard):

  • Make use of syndicated data as a leading indicator.  More and more of this is available.  Determine what is available and match it to your business needs, leveraging econometric models.
  • Collaboration is still partly a function of bargaining position.
  • Before you collaborate, make sure you have done your analytical homework so that you understand the total opportunity and how much you need to capture and how much you can afford to give away.
  • A “forecastability” or “reasonability” analysis, which allows demand planners to be more efficient by highlighting areas where they can engage their education, training, and experience rather than sifting through data, is becoming a best practice. 
  • Two key performance indicators that might not have been in your textbook probably ought to be part of your demand planning process:

              √ Forecast Value Added (the mean absolute percentage error of the old (or possibly naive) forecast approach minus that of the new forecast approach)

              √ Cost of Inaccuracy ((margin and lost goodwill * units underforecasted less safety stock) + (cost of holding inventory * units overforecasted), all summed over the relevant time period)

  • Consider engaging finance in the demand planning process.
  • Know the difference between your financial or sales objective and the demand plan.
  • Many companies struggle with the harmonizing of qualitative and quantitative forecasting.  A generally helpful concept here is that qualitative input tends to be best from a top-down perspective and allocated down; quantitative forecasting tends to be at a lower, if not the lowest, level and rolled up.
  • Forecast both shipments and end-customer consumption and the difference.  In  the consumer goods industry, this is essentially “trade inventory”.
  • Microsoft Excel is still the predominant planning software.  It is how people and organizations innovate quickly.  However, building models in Excel itself is problematic in terms of scale, maintenance, and process standardization.  A useful improvement would be getting IT or a consultant to create your model in Visual Basic, leveraging Excel as the user interface. 
  • Enterprise software is useful, but customers and users need to demand more from their software vendors.
  • Fit both your model and your metrics to the nature of the business and the data.
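A hedged sketch of the two KPIs above, with hypothetical numbers.  Note the sign convention chosen here: Forecast Value Added is written as baseline error minus new error, so a positive value means the new approach helped.

```python
def forecast_value_added(mape_new, mape_baseline):
    """FVA with a positive-is-better sign: baseline error minus new error."""
    return mape_baseline - mape_new

def cost_of_inaccuracy(under_units, safety_stock, margin_plus_goodwill,
                       over_units, holding_cost):
    """One-period sketch of the cost-of-inaccuracy KPI described above."""
    lost_sales = max(under_units - safety_stock, 0) * margin_plus_goodwill
    holding = over_units * holding_cost
    return lost_sales + holding

# Hypothetical numbers: the new model cut MAPE from 30% to 22%.
print(round(forecast_value_added(0.22, 0.30), 2))    # → 0.08
print(cost_of_inaccuracy(500, 200, 4.0, 300, 0.5))   # → 1350.0
```

Summing the cost term over all periods and SKUs turns forecast error into a currency figure that executives can weigh directly.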

While we are on the topic, allow me to also point you to my blog post of a few days ago (September 9) where I outline some of the differences between forecasting per se and a robust demand planning process.

Careful, Comprehensive Inventory Management (Part 1)

Manufacturers and distributors usually spend most of their cash on inventory.  In fact, many service organizations like utilities and health care delivery organizations spend lots of money on materials.  But in the case of manufacturers and distributors, just look at the cost of goods sold as a proportion of sales, compared to any other item.  Given that reality, the better part of wisdom mandates a careful and comprehensive approach to managing inventory.

As a memory aid, I use A56σ to represent such a careful, comprehensive, and corporate approach to inventory management.  Each component of A56σ is essential for achieving sustainable, continuous improvement in inventory efficiency.  There are five concepts which I will alliterate with the letter “A” and the tools of six sigma.  Here is the first “A”.

Anticipate – anticipate market requirements

The more you are able to accurately anticipate the demand by your end customer in the marketplace, the more you will be able to move, make, buy and store the inventory that will sell quickly.  This may seem like a self-evident axiom, but this is not easy and the benefits of incrementally better anticipation go directly into additional revenue as well as more efficient inventory and use of cash. Large bodies of knowledge have been built around this subject from rigorous quantitative models for forecasting to methodologies for collaborative forecasting, both within an organization and across organizations.  The point of diminishing returns can be reached fairly quickly, but if you are not there, it may be your most significant leverage for improved supply chain performance.
