Tuesday, 21 December 2010

Happy Holidays thanks to OR

At this time of the year many of my colleagues and fellow Dutchmen arrange to go skiing in France or Austria (Switzerland is too expensive, you know the Dutch) or fly off to some sunny destination to enjoy the December holidays. Personally I like to stay at home and enjoy being with family and friends. It also allows me to write this blog and enter the December Blog Challenge of INFORMS.

How do Holidays and Operations Research relate? Well, a lot I would say. The obvious examples for this time of year are:


  • Given a (tight) budget, deciding which Xmas presents to buy to maximise joy (the knapsack problem; see the sketch after this list)
  • Finding the shortest route along all the shops you want to visit for Xmas shopping, or along all the relatives you want to visit (the travelling salesman problem). Santa's route is in fact a travelling salesman tour.
  • Fitting all the presents you bought into your car (vehicle load optimisation)
  • Avoiding weight gain during the Xmas holidays by designing a healthy Xmas menu (the diet problem)
  • And many more.
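To make the first item concrete, here is a minimal sketch of the 0/1 knapsack solved by dynamic programming in Python. The presents, prices and "joy" scores are made up for illustration.

```python
def best_presents(budget, presents):
    """0/1 knapsack by dynamic programming: maximise total joy within the budget."""
    table = [0] * (budget + 1)                  # table[b] = best joy achievable with budget b
    choice = [[] for _ in range(budget + 1)]    # which presents achieve it
    for name, price, joy in presents:
        for b in range(budget, price - 1, -1):  # backwards, so each present is bought at most once
            if table[b - price] + joy > table[b]:
                table[b] = table[b - price] + joy
                choice[b] = choice[b - price] + [name]
    return table[budget], choice[budget]

# Hypothetical shopping list: (present, price in euros, joy score)
presents = [("board game", 30, 8), ("book", 15, 5), ("perfume", 45, 9), ("scarf", 20, 4)]
print(best_presents(60, presents))   # -> (14, ['book', 'perfume'])
```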
Those who don't stay at home but book a flight, a train, a hotel room or any other holiday accommodation also come across Operations Research. It is used to determine the price you pay for the flight or hotel room you book. It's called revenue management or yield management. It explains why the person next to you paid a different fare than you and why prices for hotel rooms and flights fluctuate from day to day. The basic principle of revenue management is rather simple: try to maximise revenue given uncertain demand. Remember selling lemonade as a child outside your house? You had to decide when to try to sell it (it didn't work on a rainy day), decide on a reasonable price, and decide when and how to change the price as the day rolled on. Things are no different in selling flights or hotel rooms. There is, however, more to revenue management than just determining the right price.
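As a minimal illustration of that basic principle (not the model discussed further on), the sketch below picks the price that maximises expected revenue for a fixed amount of perishable capacity under uncertain, price-dependent demand. The demand model and all numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
CAPACITY = 80                     # perishable inventory: unsold units simply expire
prices = np.arange(40, 121, 5)    # candidate prices in euros

def expected_revenue(price, n_sim=20_000):
    """Simulate uncertain demand at a given price and return the average revenue.
    Assumed demand model: Poisson with a mean that falls linearly in price."""
    mean_demand = max(150 - price, 0)
    demand = rng.poisson(mean_demand, size=n_sim)
    sales = np.minimum(demand, CAPACITY)      # you can never sell more than you bought
    return float((price * sales).mean())

best_price = max(prices, key=expected_revenue)
print("best price:", best_price, "expected revenue:", round(expected_revenue(best_price)))
```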


One of my customers is in the travel industry. They offer flights, hotels and holiday accommodations, or combinations thereof, via the web. It is interesting to see how their planning cycle works and how Operations Research is used to support it. It all starts with having something to sell. Before you can sell, you need to have inventory (how many cans of lemonade to make?). Each year, ahead of the holiday season, it must be decided how many hotel rooms and flight seats are required to fulfil market demand or capture the market potential. This decision can have a lot of impact: if too little is bought, it is a missed opportunity for increased revenue; if too many rooms or seats are bought, a lot of the capacity will be left unused, leading to uncovered costs. Although deciding on the exact amounts is still a craftsman's job, a lot can be learned from the past. This is where Operations Research offers a helping hand. Using advanced forecasting methods the capacity managers are able to make good estimates of the required inventory, which is then (together with strategic commercial objectives) input for the sales & booking process.


Given the above, you can imagine that the inventory risk is rather large. Huge amounts of flight seats and hotel rooms need to be bought in advance, and they are perishable: if a flight seat, hotel room or accommodation isn't sold, it expires. Moreover, 95% of the inventory needs to be sold to have a healthy P&L, which leaves little room to manoeuvre. This is already hard when you are selling just one product. Think about selling 18 million different products! Here is where Operations Research comes in again. To deal with the complexity and size of the challenge, we developed a specific optimisation model that is incorporated into a revenue management system. The model is used to determine the optimal price for each of the 18 million products. On a daily basis 2-3 million product prices are updated, taking the actual bookings and updated forecasts of future demand into account.


At the introduction of the model, the mindset of the pricing managers was to set prices to sell 100% of the inventory. They achieved high occupation levels, but at the cost of many discounts. With the revenue management model this changed. First they used the model to maximise revenue by making timely price adjustments. Now they are even beyond that, using the model to find new growth opportunities. The revenue management model handles everything, so they too can enjoy their holidays, thanks to OR. Happy Holidays to you all!

Wednesday, 8 December 2010

Statistical Reasoning for Dummies


"Statistical thinking and reasoning is necessary for efficient citizenship as the ability to read and write"

Is this statement too bold? I don't think so. We are surrounded by statistics, uncertainties and probabilities and need to understand them, use them and make decisions with them. But, as it turns out, statistical reasoning is very difficult, given the many mistakes that are made in newspapers, medical decision making, social science, gambling, politics. You name it, it's everywhere and so are the mistakes. To give you an extreme example, in Innumeracy J.A. Paulos tells a story about a weather forecaster. The weather forecaster reports a 50% chance of rain on Saturday and also a 50% chance of rain on Sunday, and concludes that it is therefore certain to rain that weekend. More recently, the Stonewall publication stating that the average coming-out age has been dropping was shown to be wrong by Ben Goldacre. The Stonewall survey is seriously flawed and proves the obvious point that people tend to get older when they get older, nothing more and nothing less. See Ben's Bad Science weblog for more details. Yesterday a big news item on local television was that a mother, son and grandson were born on the same date. Statistically it's not that extraordinary, contrary to what the journalist said ("It's a miracle"). It's easy to make a long list of these kinds of mistakes (the next Great Operations Research Blog Challenge theme?), but how to resolve this? Maybe some statistical reasoning for dummies could help? Let's start with an introductory chapter, some basics.

As an Operations Researcher I am used to working with terms like probability, risk, variance, covariance, t-test, and many other statistical "Red" words, as Sam Savage calls them in his book The Flaw of Averages. Many times these "Red" words are used to express a probability or risk, leading to many mistakes or confusion. Take for example the story from journalist David Duncan that was in Wired magazine a few years ago. David did a complete gene scan that checked for genetic disease markers in his DNA. Such tests will soon be part of everyday medical care (and insurance acceptance terms?). To his distress David received the message that he has mutations in his DNA raising his risk of having a heart attack. Such risks are expressed as "the probability that you will have a heart attack is x%", a single event probability. It is similar to the statement that the probability that it will rain tomorrow is 30%. But what does it mean? Will it rain 30% of the time tomorrow, or in 30% of the country? Both inferences are wrong, by the way. The problem with single event probabilities expressed in this way is that without a reference to the class of events the probability relates to, you are left in the dark as to how to interpret it. It causes Duncan to worry about having a heart attack, but should he be worried? A way around this confusion is to attach the reference class to the probability. So, the weather forecaster should state something like: in 3 out of the 10 times he predicted rain for tomorrow, there was at least a trace of rain the next day. Much of David's confusion could have been resolved if the doctor had added a reference class, putting things in perspective.

Another classic misunderstanding is the interpretation of a conditional probability, as in interpreting diagnostic tests in medicine; see my earlier blog entry on that. The approach I used to explain the correct way to interpret the test results "translates" the probabilities (stated in percentages) into real numbers, making them easier to understand. It does more or less the same as adding the reference class to a single event probability: it adds context. The last example of a much misunderstood statistic is relative risk. In the Netherlands there was much debate on whether girls should be vaccinated against cervical cancer caused by the human papillomavirus (HPV). To express the effectiveness of the vaccination, a relative risk reduction was used, something like: "This vaccine will reduce the risk of getting cervical cancer from an HPV infection by x%". This kind of statement is used regularly to express the effectiveness of preventive measures like screening, vaccines or other risk mitigation strategies. Using relative risk reduction as a measure can however be confusing. For example, if the number of women dying of cervical cancer drops from 4 to 3 per 1000, the relative risk reduction is 25%. A massive risk reduction, you would say. However, the actual reduction in women dying is only 0.1% (1 per 1000). The confusion, again, comes from not stating the reference class, causing many people to think that the relative risk reduction applies to everyone who takes the vaccine, while it actually applies only to the small group who would otherwise die (25% fewer deaths).
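A minimal sketch of the arithmetic behind both pitfalls. The cervical cancer numbers are the ones from the text; the diagnostic-test figures (prevalence, sensitivity, false-positive rate) are made-up illustration values, not data from any real test.

```python
# Relative versus absolute risk, using the numbers from the text.
deaths_without_vaccine = 4 / 1000
deaths_with_vaccine = 3 / 1000
relative_reduction = (deaths_without_vaccine - deaths_with_vaccine) / deaths_without_vaccine
absolute_reduction = deaths_without_vaccine - deaths_with_vaccine
print(f"relative risk reduction: {relative_reduction:.0%}")    # 25%
print(f"absolute risk reduction: {absolute_reduction:.1%}")    # 0.1%, i.e. 1 woman per 1000

# "Translating percentages into real numbers": natural frequencies for a diagnostic test.
population  = 10_000
prevalence  = 0.01    # 1% of people actually have the condition (assumed)
sensitivity = 0.90    # the test detects 90% of true cases (assumed)
false_alarm = 0.09    # 9% of healthy people also test positive (assumed)

sick      = population * prevalence
true_pos  = sick * sensitivity
false_pos = (population - sick) * false_alarm
print(f"of {true_pos + false_pos:.0f} people testing positive, only {true_pos:.0f} "
      f"({true_pos / (true_pos + false_pos):.0%}) actually have the condition")
```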

So the first lesson in statistical reasoning is: look for the reference class and translate probabilities and risks into real numbers. For us "professionals": let's present our results in a smart and easy to understand way and skip those Red words.

1) I used H.G. Wells' statement on statistical reasoning, from somewhere in the beginning of the last century, as a starting point.

Friday, 22 October 2010

Warning, Math Inside!

Next year the International Mathematical Olympiad (http://www.imo2011.nl/) will be held in the Netherlands. This calls for a celebration of math, but at present mathematical modelling seems to get the blame for all the economic problems we have. To name a few: the models of David X. Li are blamed for causing the credit crunch; the risk assessment models of banks failed, nearly tipping them over if it hadn't been for the government support they received; and the losses incurred by the pension funds are blamed on mathematical models as well. In pointing the finger at who's to blame for all this, many decision makers point to math modelling. It's far too complicated, they say. It is however invalid to blame math for this; after all, it is not the math that makes the decisions. What is to blame is the ignorance with which math models are applied. It's like buying a state-of-the-art electronic device and getting mad at it because it doesn't work, when reading the manual would have made all the difference.

The essence of math modelling is to describe reality in mathematical terms with a specific purpose, like determining the value of an investment portfolio or assessing the risk of a project. A mathematical model is always a simplification of reality. If the model were as complex and as detailed as reality, it would become just as expensive and as difficult to use. Math allows us to focus on the essence of the question at hand and gives us the opportunity to experiment without having to perform tests in real life. No one would think of flying in a new aircraft without first testing its ability to fly using a math model. After using the model to perform the required analysis, the results are translated back to reality. In both translations (from reality to model and back again) common sense, simplifications, assumptions and approximations are used. The model user and decision maker must be well aware of that.

A well-built math model can be a very powerful instrument; you can compare it with a chainsaw. In the hands of a well-trained analyst the math model will be an excellent and effective tool. In the hands of an ignorant user it can do a lot of harm, even to the user. A decision maker must know the scope, concepts and dynamics of the models used before adopting their results in decision making. This doesn't imply that every decision maker should be a well-trained and skilled mathematical modeller, or that the detail and complexity of the model is restricted by the decision maker's math capabilities. An aircraft pilot, for example, knows exactly the scope and conditions for using the aircraft's autopilot; this doesn't imply that the pilot could build one himself.
Robert Merton states in his 1997 Nobel Prize lecture:

"The mathematics of models can be applied precisely, but the models are not at all precise in their application to the complex real world. Their accuracy as useful approximations to that world varies significantly across time and place. The models should be applied in practice only tentatively, with careful assessment of their limitations in each approximation."

He is absolutely right about it, but it is easily forgotten. Merton himself failed to keep this in mind, and as a result Long-Term Capital Management went down in 1998.

As an operations research consultant I use math models all the time. It is tempting to keep the details of the modelling out of the scope of the decision makers. But my experience is that developing models in close cooperation with the decision maker is a far better way. When keeping the decision maker out of the loop, you need to do a lot of explaining after the model has been developed. Many times, when the results from the model are not as expected, it is the math model that gets the blame. However, it is the ignorance of the decision maker that is the cause: they didn't read the manual (or we didn't explain the model well enough). Developing the model in close cooperation with the decision maker leads to better models, because scope, assumptions and simplifications are discussed and agreed upon during the development process. It also allows the decision maker to learn to use and understand the results of the model while it is being developed. No manual required! It is not only easier; it is also a better way, improving decision quality.

Sunday, 26 September 2010

What’s the best option?

Many of the decisions we make are choices between two or more alternatives. When the outcomes of the alternatives are known with certainty, deciding between them is relatively easy. However, since the environment we live in is fraught with uncertainty, things become more difficult. This uncertainty prevents us from making an accurate estimate of the outcome and makes choosing the best alternative difficult. But uncertainty also offers new opportunities; new information helps to improve our decision and generate more value. In business many of the decisions that need to be made are of that kind. Take for example a manufacturer introducing a new product. It needs to decide on the number of products to make, not knowing how many customers will buy it. Should it first do a test run with the new product? This kind of decision making can benefit a lot from what Operations Research has to offer; it can support companies in answering the question: What's the best option?

A much-used technique in business for finding the best option is net present value (NPV) analysis. By comparing the discounted cash outflows (investments) and discounted cash inflows (revenues), it can be inferred whether the project will add value or not. The trouble with this approach, however, is that it is incapable of putting a value on the uncertainty involved in the decision. To illustrate, assume a company needs to decide whether to invest in a new technology that would cost it €650 million to develop, while total (discounted) revenues over the coming 5 years would be €500 million. The NPV of the project (-€150 million) would result in a negative advice on investing in the technology. But is that really the best option? In the above example the company doesn't know for sure that the expected revenues will equal €500 million; this depends on the number and price of the products that it will sell after investing the €650 million. The NPV analysis can only capture part of these uncertainties, for example by running a scenario analysis on a range of possible market prices and sales for the product. That way an upper and lower bound of the NPV can be estimated, but it doesn't help incorporate the variance across the different scenarios into the decision. There is a better way to put a value on the uncertainty: applying a real options approach.

A real options approach uses option valuation techniques to value decisions. In the above example the option is whether or not to invest €650 million to earn €500 million, with much uncertainty about the expected revenues. Real option analysis uses the famous Black & Scholes option valuation model to value this decision, although other models are used as well. The B&S model takes a number of arguments. In a real option valuation the stock price (S) in the B&S model is equal to the estimated present value of the cash inflows (revenues). The exercise price (X) is equal to the present value of the cash outflows (investments). Uncertainty (σ) is measured by taking the standard deviation of the growth rate of the future cash flows (in our example the volatility in revenue for the new product). The time to expiry (t) is the period during which the option can be exercised. Dividends (δ) are the costs incurred to preserve the option. The risk-free interest rate (r) is set to the yield of a riskless security (are there any nowadays?) with the same maturity as the duration of the option.

Assuming a volatility (σ) of 35% and a 5-year risk-free rate of 2.5%, the option value becomes €129 million, assuming no option preservation cost. That would mean that investing in the new technology really is an opportunity! The difference between the NPV of the project and the value based on the real options approach shows the value of the flexibility the company has because it can wait and invest when the uncertainty on product price and sales is resolved, for example by doing market research or running a test with the new product. Note that using the B&S model assumes that revenue (the stock price) follows a lognormal distribution with a constant level of volatility. That may not be the case for the decision at hand. The lognormal distribution is also what makes the option value increase when the duration is increased. In practice assuming lognormal returns is probably not valid. There is a way around it (Monte Carlo option valuation, for example); I'll discuss it in a later blog entry.
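A minimal sketch of this valuation in Python, using only the standard library. Plugging the post's figures (S = 500, X = 650, σ = 35%, r = 2.5%, δ = 0) into the Black & Scholes call formula reproduces roughly the €129 million above, and roughly the €73 million quoted further on when the option life is cut to 2.5 years.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def real_option_value(S, X, sigma, t, r, delta=0.0):
    """Black & Scholes call value, read as a real option value.
    S: PV of cash inflows, X: PV of the investment, sigma: volatility,
    t: time to expiry in years, r: risk-free rate, delta: option preservation cost."""
    d1 = (log(S / X) + (r - delta + 0.5 * sigma ** 2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return S * exp(-delta * t) * norm_cdf(d1) - X * exp(-r * t) * norm_cdf(d2)

print(round(real_option_value(500, 650, 0.35, 5.0, 0.025)))   # ~129 (million euros)
print(round(real_option_value(500, 650, 0.35, 2.5, 0.025)))   # ~73  (million euros)
```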


A real options approach not only offers a better way to value the uncertainty in a decision, it also provides a framework that helps identify and prioritise the key levers management can pull to increase the payoff of the opportunity. In essence all six parameters in the B&S model offer a lever to pull. It allows management to proactively deal with the uncertainty involved in the decision. For example, the effect of the entry of competitors with a similar product after 2.5 years (it would cost €56 million, dropping the option value to €73 million) or the expected increase in revenue from marketing strategies around the product. With real options you're able to tell which option is best!

Saturday, 28 August 2010

On Beer, Whips and Chaos

The last couple of months the topic that pops up in many of the conversations I have is forecasting. Last week it was a beer brewer, this week a mail company asking about it. Recently the Dutch financial paper had a full page on the value of forecasting for DSM NeoResins. The article (in Dutch) explains why forecasting demand and the resulting stock impact were crucial for DSM to understand the dynamics of their supply chain and helped to manage the impact of the economic downturn. DSM experienced the classic bullwhip effect, a well-known phenomenon in supply chain management, and was looking for ways to tame it.

The bullwhip effect is the amplification of demand fluctuations as one moves up the supply chain from retailer to manufacturer. It can be measured as the variance of the orders divided by the variance of demand. A bullwhip measure larger than one implies that demand fluctuations are amplified. The bullwhip effect has three main causes. First there is the supply chain structure itself: the longer the lead time in the supply chain, the stronger the bullwhip effect, because a longer lead time will cause more pronounced orders as a reaction to demand increases. The second cause is local optimisation: since placing an order involves cost, there is an incentive to hold orders back and only place aggregate orders. Last but not least is the lack of information on actual demand. Without actual demand data one has to rely on forecasted demand. When applied without thought, forecasting will aggravate the bullwhip effect, leading to forecasted chaos.

To illustrate the impact of a "non-optimal" forecasting method, picture a simple supply chain, for example the supply chain from the Beer Distribution Game. At each point in time the following sequence of events takes place in each part of the supply chain. Incoming shipments from the upstream decision maker are received and placed in inventory. Next, incoming orders from the downstream decision maker are received and are either fulfilled (when stock levels suffice) or backlogged. Last, a new order is placed and passed to the upstream echelon. Inventory is reviewed each time period. In deciding the order quantity we have to estimate future demand, so we need a forecasting method. The figure below shows the results of a Moving Average (MA) and an Exponential Smoothing (ES) method to forecast future demand. Compared to the actual demand one can clearly see that the ES forecaster gives better results.
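The two forecasters are simple enough to sketch in a few lines of Python. The demand series below is an invented stand-in for the data behind the figure, so the exact errors will differ; the point is only how MA and ES produce one-step-ahead forecasts that can be compared on mean squared error.

```python
import numpy as np

rng = np.random.default_rng(1)
demand = 20 + 6 * np.sin(np.arange(52) / 4) + rng.normal(0, 4, size=52)  # illustrative weekly demand

def moving_average(history, window=4):
    """MA forecast: the mean of the last `window` observations."""
    return float(np.mean(history[-window:]))

def exponential_smoothing(history, alpha=0.3):
    """ES forecast: exponentially weighted average of the whole history."""
    level = history[0]
    for d in history[1:]:
        level = alpha * d + (1 - alpha) * level
    return float(level)

ma_err, es_err = [], []
for t in range(8, len(demand)):           # one-step-ahead forecasts over the series
    ma_err.append((demand[t] - moving_average(demand[:t])) ** 2)
    es_err.append((demand[t] - exponential_smoothing(demand[:t])) ** 2)
print("MSE moving average       :", round(float(np.mean(ma_err)), 1))
print("MSE exponential smoothing:", round(float(np.mean(es_err)), 1))
```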


To illustrate the impact of forecasting on stocks and order quantities, assume that it takes 3 periods before an order is received. This needs to be taken into account when placing an order. The figure below shows actual demand compared with the order quantities based on the MA and ES demand forecasts. As can be clearly seen, the fluctuations in demand are amplified in the orders, illustrating the bullwhip effect.


The amplification for the ES forecasts is about twice the amplification for the MA forecasts. So although ES gives better forecasts, in this case it amplifies the fluctuations in demand more than the simple MA forecasting method does. Not something you would expect. When comparing net stock with actual demand a similar picture arises.
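A minimal sketch of how such an experiment can be set up: a single echelon with a 3-period lead time, an order-up-to policy driven by either forecaster, and the bullwhip ratio measured as the variance of orders over the variance of demand. The demand series and parameters are illustrative, so the ratios (and which forecaster amplifies more) depend on them.

```python
import numpy as np

rng = np.random.default_rng(7)
demand = np.clip(20 + rng.normal(0, 4, size=200), 0, None)  # illustrative customer demand
LEAD_TIME = 3                                               # periods before an order arrives

def simulate_orders(demand, forecaster):
    """Order-up-to policy: each period, bring the inventory position up to the
    forecast of demand over the lead time plus the review period."""
    inventory_position = demand[0] * (LEAD_TIME + 1)
    orders = []
    for t in range(1, len(demand)):
        inventory_position -= demand[t]                          # demand is filled or backlogged
        target = forecaster(demand[: t + 1]) * (LEAD_TIME + 1)   # order-up-to level
        order = max(target - inventory_position, 0.0)
        orders.append(order)
        inventory_position += order                              # the order enters the pipeline
    return np.array(orders)

ma = lambda hist: np.mean(hist[-4:])          # 4-period moving average forecast
def es(hist, alpha=0.3):                      # exponential smoothing forecast
    level = hist[0]
    for d in hist[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

for name, forecaster in (("MA", ma), ("ES", es)):
    orders = simulate_orders(demand, forecaster)
    print(name, "bullwhip ratio:", round(float(np.var(orders) / np.var(demand[1:])), 2))
```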



So, when attempting to forecast, be aware of the chaos it can create. Improper forecasting may have a devastating impact on the bullwhip effect. As a consequence inventory costs will increase and customer service will suffer significantly. The best way to start is by selecting a few forecasting methods and order policies and testing the accuracy of the forecasts on past periods. Using the mean squared error of the forecasts as a goodness-of-forecast measure, the best forecasting method can be selected. One can also use the bullwhip measure in selecting the forecasting method.
PS: If you would like to have the data from the above examples, just drop me a note.

Sunday, 11 July 2010

Operations Research: key factor in competing for the future

Prahalad's message in Competing For The Future is clear: identify, create and then dominate emerging market opportunities before your competitors have the opportunity to exploit them. Applying analytics is suggested to be the best way to achieve that. Many books have been written on how analytics will bring companies the advantage in beating their competition to market dominance. Davenport's new book Analytics at Work (I wonder where he got the title ;-) ) is another one; it will probably sell big. These books hook into the global trend that more and more reliable data is becoming available through the use of IT systems. Companies keep track of their business in much more detail, from order to delivery, leading to ever growing piles of data. Mining these mountains of data with computerized analysis creates new information and insights, leading to more fact-based decision making. In my opinion these "analytics" books focus too much on data analysis alone. They focus too little on actually using analytics to learn from past experiences, identify new relationships and push the innovative power of companies to reinvent their business time and time again, helping them stay ahead of the competition. To accomplish that, more is required than just a few data dashboards or regression lines. It's mathematics, or in its applied form Operations Research, and the way of thinking that goes with it.



The added value of mathematics is no longer, as the famous mathematician G.H. Hardy argued in his essay A Mathematician's Apology, its lack of applications in the outside world. Mathematics has proven to add value over and over again, and this is not something that only arose in the past few years with the analytics trend. Hardy wrote the essay in the same period as Operations Research came into existence at the Bawdsey Research Station in the UK, where mathematics was used as a means to analyse and improve the working of the UK's early warning radar system, Chain Home. With Operations Research the limitations of the Chain Home network were exposed and addressed, allowing for accurate early warning and remedial action; a turning point in the Battle of Britain.



When we look at recent successes in business, we time and time again conclude that the driving power behind these successes is applied mathematics and optimisation. Take Google for example: its key asset is a mathematical algorithm. Another example is Tesco, the first non-American retailer to be successful in the States. Key to that success is their ability to analyse the buying habits of their customers and to apply this knowledge in targeting customers with specific special offers. Likewise, Albert Heijn uses information from buying habits to manage its complete supply chain, from sourcing decisions all the way down to managing the stock levels in the shops. These examples show that gathering data is not enough to create value from it. Success comes when a company is capable of analysing the acquired data, learns from it, optimises its current operation and ultimately reinvents its business. This can only be achieved when a company makes mathematics and optimisation a core competence. In a recent publication Alexander Rinnooy Kan, chairman of the Social and Economic Council of the Netherlands and a well-known mathematician and operations researcher, even goes a little further. He notes that not that many (applied) mathematicians are present in the boardrooms of companies in the Netherlands; I expect that to be the case elsewhere as well. However, the ability of applied mathematicians to structure and solve complex problems could provide companies with the key to competing for the future, he states.



I totally agree. Many companies have invested in IT systems to capture and generate more and more data on their operational processes. These investments led to vast amounts of data but, to their disappointment, didn't improve decision making. The available data, the numerous ways to display and analyse it and the many mistakes that can be made doing so leave the decision makers in the dark. The additional information doesn't make life easier for them; it creates an ever growing chaos of reports and opinions. It is the capability of applied mathematicians (Operations Researchers or Econometricians, whichever term you prefer) to create order in this chaos, applying logic and optimisation techniques to guide the decision maker to the best decision and stay ahead of the competition. Not only should these capabilities be present in the support units of a company, they should be present in the boardroom as well.

Sunday, 13 June 2010

Zen and the art of strategic decision making

Formal models are still rarely used in strategic decision making. In the past few decades a lot of effort has been put into the development of ever more sophisticated models and methods to capture the complexity of the decisions boardrooms have to make. Many of the available techniques are well suited to these kinds of questions; think of Monte Carlo simulation, game theory or option valuation. When applied correctly these techniques create insight, make decisions fact-based and improve the overall quality of the decision. When talking to members of the board on this subject, their response is often that formal models are either too complex or not complex enough to support the decisions they need to make. Even if the models could support them, their feeling is that gathering and validating the data and analysing it with the model would take too much time and be too complex to really help. They trust their intuition or use analogies instead. In my experience this is a misconception; the "way" to go is to combine intuition and formal models into the art of strategic decision making.


The value of using formal models is that they provide a systematic framework that structures the identification of business rules, the gathering of data and, most importantly, the discussion in the boardroom on the key value drivers of the decision. Which of those drivers can be influenced by the board and which can't? How do these drivers influence the decision? To give you an example, government regulations can have an enormous impact on the strategy of a company, but they can't be influenced much. Using a mathematical model can provide insight into the impact of changes in government policy and supports the development of strategies for each of the possible outcomes. That way the board is prepared for changes and can get things into gear as soon as policies change, which gives them a head start over their competitors. There are many other factors that can be treated this way, like competitors, customers or uncertainties in the economy.


The most value is created with formal models when they are built and used interactively in the boardroom. While discussing the structures, business rules and key parameters of the formal model, the board members are stimulated to think and talk about what drives their business. They will touch upon the core strategic issues and find out how much they know about them and where the blind spots are. The collective effort of building the model will stimulate a common understanding of the drivers of the decision to be made. Also, since everyone understands the structures, business rules and key parameters that drive the model, and how they interact, it builds trust in the model outcomes and eases communication and implementation of the strategic choices later on.


So using formal models is heaven? That would be too easy. Models are not and should not be the deciding factor when making strategic decisions. I have learned some important lessons from using models in boardrooms. One of them is to start managing expectations on the value of using a formal model from day one. One common misconception is that the formal model is some kind of black box supplying flawless predictions about the future. Using formal models will not remove the uncertainty involved in boardroom decision making; making the board members aware of that is an important first step. Another much encountered expectation is that the model is some kind of Delphic oracle and that using it will provide all the answers, which is of course also not true (unfortunately). Even worse, perhaps, is when the board falls in love with the model and uses it to prove its own convictions. Then the model is "tortured" until it provides the answer the board is after.


Strategic decision making requires, as Pirsig explains in his well-known book Zen and the Art of Motorcycle Maintenance, combining the rational and the irrational. It requires the creativity, intuition and analogies of the board members. With Operations Research the thinking process is structured, rationalizing the decision making process. So formal modelling and intuition must coexist in strategic decision making, increasing the overall quality of the decision.

Thursday, 13 May 2010

What’s the value of information?

Every weekday I take my car to either drive to the office or visit a customer. Sometimes I work at home first, especially on Tuesdays or Thursdays. On these weekdays the expressways around the village I live in are jammed with traffic, but not always. Of course working at home is great and fits the current way of thinking on how to best mix work and social responsibilities. Working at home allows me to take my kids to school, while the internet lets me stay in touch with my colleagues. However, discussions in front of the whiteboard are still hard using web communications; nothing beats sitting together in the same room to digest and solve the challenges of a customer. So on a Tuesday with a meeting scheduled at 8:30, what should I do? Take my chances, take my kids to school and hope that I'll arrive at the office in time? Or should I leave early and drop my kids off at the neighbours'? Of course this trade-off depends on what happens if I miss the meeting at the office and on how many times I have already asked my neighbours to take my kids to school. Can some analytics support me in this?

What helps is to structure this decision and pinpoint the uncertainties. The decision I need to make is whether to take the kids to school or leave home early. If I take the kids to school I enjoy the 40 minutes in the car in which we can talk about all kinds of things. When, in that case, there is a traffic jam I will be late at the office. Let's assume that there is a 60% chance of a traffic jam causing me to be late by 60 minutes; I need to catch up that time, which will cost me an additional 60 minutes. The other option is to leave home early. I won't enjoy the 40 minutes in the car with the kids and, in addition, I will have to take my and the neighbours' kids to soccer practice next Saturday, which will cost me 50 minutes in total. When there is a traffic jam I will make the meeting on time because I left home early; we'll have an effective meeting, saving me 60 minutes that day. When there is no traffic jam, however, I'll be much too early at the office and will have to wait 30 minutes for the guard to open the doors, costing me 80 minutes in total. Based on the assumed 60% chance of a traffic jam the best option is to take the kids to school. The value of the decision is 4 minutes.





Since I do not travel to the office on all Tuesdays, the 60% chance of a traffic jam is just a guess. One way to be more certain about it is to do some research on the internet and find out about the frequency of traffic jams on the expressways. Since time is scarce, how much time would I be willing to invest in that? In decision analysis this is called the Expected Value of Perfect Information (EVPI). It is easy to calculate. In this case there is a 60% chance of a traffic jam, in which case I leave home early; it will cost me 50 minutes next Saturday but gain me 60 minutes of efficiency today, netting 10 minutes. There is a 40% chance that I can take my kids to school and enjoy the 40 minutes in the car with them. So in the case of perfect information the value of my decision is 60%*10 + 40%*40, or 22 minutes. The EVPI therefore is 22-4, or 18 minutes. So if I am able to find the correct chance of a traffic jam in less than 18 minutes, I can improve the value of my decision.
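A minimal sketch of that calculation in Python; the payoffs (in minutes) are the ones assumed in the story above.

```python
P_JAM = 0.6

# Payoff in minutes for each (alternative, state), as assumed above.
payoff = {
    ("take kids", "jam"):      40 - 60,    # enjoy the ride, lose 60 minutes catching up
    ("take kids", "no jam"):   40,         # enjoy the ride, arrive on time
    ("leave early", "jam"):    -50 + 60,   # soccer duty next Saturday, but an effective meeting
    ("leave early", "no jam"): -50 - 30,   # soccer duty plus 30 minutes waiting for the guard
}
alternatives = ("take kids", "leave early")

def expected_value(alternative, p_jam=P_JAM):
    return p_jam * payoff[(alternative, "jam")] + (1 - p_jam) * payoff[(alternative, "no jam")]

best_without_info = max(expected_value(a) for a in alternatives)
# With perfect information the best alternative can be chosen per state, before deciding.
with_perfect_info = (P_JAM * max(payoff[(a, "jam")] for a in alternatives)
                     + (1 - P_JAM) * max(payoff[(a, "no jam")] for a in alternatives))
print("value without extra information:", best_without_info)    # 4 minutes
print("EVPI:", with_perfect_info - best_without_info)            # 18 minutes
```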


But you know the internet, "you never know what you're gonna get". Let's assume that the internet fails 25% of the time in predicting a traffic jam. This works in two ways: it could indicate that there is a traffic jam when there isn't any, or it could indicate that there is no traffic jam when there is one. The expected value of the decision whether to leave home early now becomes 5.5 minutes, resulting in an expected value of imperfect information (EVIPI) of 1.5 (5.5-4) minutes. Note that using the internet only improves the value of your decision when its ability to predict whether there will be a traffic jam rises above 72%.
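Extending the sketch above with Bayes' rule gives the value of the imperfect forecast; the 75% accuracy is the assumption from the text.

```python
P_JAM, ACCURACY = 0.6, 0.75   # prior chance of a jam; chance the internet forecast is right

payoff = {
    ("take kids", "jam"): -20, ("take kids", "no jam"): 40,
    ("leave early", "jam"): 10, ("leave early", "no jam"): -80,
}

def best_ev(p_jam):
    """Best expected value over both alternatives, given a probability of a jam."""
    return max(p_jam * payoff[(a, "jam")] + (1 - p_jam) * payoff[(a, "no jam")]
               for a in ("take kids", "leave early"))

# Probability that the internet predicts a jam, and the Bayes-updated jam probabilities.
p_predict_jam = ACCURACY * P_JAM + (1 - ACCURACY) * (1 - P_JAM)
p_jam_if_predicted = ACCURACY * P_JAM / p_predict_jam
p_jam_if_clear = (1 - ACCURACY) * P_JAM / (1 - p_predict_jam)

value_with_forecast = (p_predict_jam * best_ev(p_jam_if_predicted)
                       + (1 - p_predict_jam) * best_ev(p_jam_if_clear))
print("value with imperfect forecast:", round(value_with_forecast, 1))   # 5.5 minutes
print("EVIPI:", round(value_with_forecast - best_ev(P_JAM), 1))          # 1.5 minutes
```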





The EVPI and EVIPI measures are not only useful in everyday decisions; they can be used in any business decision. By computing the value of information (perfect or not) for each uncertainty in a business decision, a trade-off can be made between the cost of gathering additional information and the impact it will have on the total value of the decision. It is a perfect guide for deciding how to best use scarce resources like time and improve the overall value of the decision.

Sunday, 11 April 2010

Risk Analysis Placebos

Carbon Capture and Storage (CCS) is getting much attention in the Netherlands. It is thought of as one of the ways to meet the climate targets. It is the Dutch government's intention to strive for a 100% sustainable energy system, and for the Dutch government CCS offers a solution for the period of transition from fossil fuels to sustainable sources of energy. The government wants to start a CCS pilot project in Barendrecht in which CO2 is stored in an empty gas field. A mandatory risk assessment for the Barendrecht project claims CCS to be safe: the risk of a sudden blow-out of concentrated CO2 would be minimal, with the chance of a fatal accident less than 1 in a million years. I took a closer look at the risk analysis method that was applied and found that it is incomplete and unfounded. The calculated risks are underestimated, leading to the false belief that it is safe to commence with the project. The flaws I found do not only apply to the risk assessment of the Barendrecht project but to many risk assessments of the storage or transport of dangerous goods. These risk assessments are therefore Risk Analysis Placebos.

The flaws I found are numerous. One of the most important flaws concerns the validation of the data used in estimating probabilities like failure rates. In assessing risks it is required to know the failure rates of, for example, pipelines, compressors, etc. In the Netherlands these parameters are prescribed by law. It is known that these standards are not verified often enough; also, knowledge from accidents is not used to update them. This is already bad practice from a risk management perspective. For the risk assessment of CCS it is even worse: the failure rates for the CO2 pipelines were derived from failure rates of other installations and were not verified at all. Since CCS is a new technology this is quite questionable. Would you launch a space shuttle for the first time using the model of a Sputnik for test runs?

Another important flaw I found concerns the model used. Besides the standardized parameters, the model to be used for a risk assessment is also prescribed by law in the Netherlands. I'm still wondering why a one-size-fits-all approach would be sufficient in risk assessments. The prescribed model simulates the cloud of the dangerous gas or substance after release under different circumstances. Given this estimated cloud, the concentration of the substance can be calculated at different distances from the source. Risk is then calculated at each location by multiplying the estimated number of casualties by the probability of the event. The prescribed model has several flaws. First of all it is not capable of dealing with buildings; it assumes that the area for which the risks are assessed is completely flat. Also it is not capable of dealing with wind speeds lower than 1.5 m/s. In the case of CO2, no wind is the most dangerous scenario, as the Mönchengladbach incident in 2008 has shown. In my opinion the most important shortcoming is that for many parameters in the model an average estimate is used (see also What's wrong with Average). To name a few, an average is used for failure rates, population density, pressure in the pipeline, amount of substance released, wind speed, temperature and diameter of the leak. As Sam Savage explains very well, using an average can be a great mistake. Since the model for the mandatory risk assessment is certainly not linear, the strong form of the Flaw of Averages (Jensen's inequality) applies. In such a case no average input (point estimate) should be used; instead a simulation must be run using the complete probability distribution of each of the parameters to get the true average output of the model and therefore the correct risk measures.
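A small, made-up illustration of that last point. The toy "consequence model" below is invented purely to show Jensen's inequality at work and has nothing to do with the legally prescribed dispersion model: feeding a non-linear model the average inputs does not give the average output.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_consequence(release_rate, wind_speed):
    """Toy non-linear consequence model (purely illustrative): larger releases and
    lower wind speeds give disproportionately higher concentrations downwind."""
    return release_rate ** 1.5 / np.maximum(wind_speed, 0.5)

# Uncertain inputs with assumed, illustrative distributions.
release = rng.lognormal(mean=np.log(10), sigma=0.5, size=100_000)  # kg/s
wind = rng.weibull(2.0, size=100_000) * 4.0                        # m/s

at_average_inputs = toy_consequence(release.mean(), wind.mean())
average_of_outputs = toy_consequence(release, wind).mean()
print("consequence at the average inputs  :", round(float(at_average_inputs), 1))
print("average consequence over the inputs:", round(float(average_of_outputs), 1))
# The two differ because the model is non-linear (Jensen's inequality): in this
# toy case the point estimates understate the true average consequence.
```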

The above flaws in the risk assessment method require change as soon as possible. More sound risk assessment methods are needed; the current one feeds risk analysis placebos to the public and the decision makers. The current legally standardised risk assessments in the Netherlands have little value for estimating risks, let alone for developing sound risk mitigation strategies. Risk analysis goes beyond plugging a few numbers into a model. It requires thorough knowledge of the situation being assessed, tested and validated models, and validated data. Work for experts, not only in engineering but in risk analysis as well!


For Dutch readers: A news item for television was made out of my analysis, see Netwerk.

Sunday, 28 March 2010

On Her Majesty’s safety

Last year the Dutch were shocked by the events during the annual celebration of the Queen's birthday on April 30th. During the celebration in Apeldoorn, which was visited by the Queen and her family, a car ploughed into a crowd, killing five people and wounding 12. The 38-year-old driver was targeting the royal family; his attempt failed. In the months after the event it was investigated why this attack wasn't detected by the extensive risk assessments of the various security agencies involved and why the risk mitigation strategies were not effective enough to either prevent the attack or at least safeguard the Queen, her family and the people taking part in the celebrations. In my opinion one of the reasons is the method used to do the risk assessments. While reviewing the reports on the risk analysis and the identified risk mitigations of the event I came across an old friend (or is it foe?), the well-known risk matrix. The method is used a lot, but this doesn't imply that it is an effective risk analysis method, let alone a sound basis for developing risk mitigation strategies. Here is why.


A risk matrix is a table that has several categories of probability and impact. Each cell of the matrix is associated with a recommended risk mitigation strategy. The matrix calculates risk as the product of probability and impact. Note that risk is not a measured but a derived attribute (Risk = Probability * Impact). The cells in the matrix are coloured to indicate the severity of the risks: red for the highest risks, green or blue for the lowest risks. The matrix offers an easy to use and straightforward way to organise pre-listed scenarios in terms of risk. Its use has spread through many areas of applied risk management consulting and practice, and it has even become part of national and international standards. It is also used at the Dutch Ministry of the Interior, which is responsible for the Queen's safety, and at the National Coordinator for Counterterrorism, which is responsible for policy development and coordinating anti-terrorist security measures. A risk matrix requires hardly any training to explain or apply, and it looks nice with its intuitive colouring. Some people really make an effort to develop wonderful colourings to impress, but this cannot hide what's wrong with risk matrices, as recent research by Tony Cox shows.


In short there are four major shortcomings. First of all, the risk matrix uses ordinal scales: for example, the probability of a scenario is Extreme, High, Medium, Low or Negligible. The consequence of using ordinal scales is that the scenarios can only be arranged in increasing or decreasing order of probability or impact. It is impossible to say that one scenario is twice as probable as another; how then can one decide on the risk of the scenario and on the right mitigation strategy? Compare this with the ratings of restaurants or movies. Using the star rating you can decide which restaurant to go to, where dinner at a 4-star restaurant is surely better than at a 1-star restaurant. Four meals in a 1-star restaurant won't make up for a dinner in a 4-star restaurant, however; these ratings cannot be added or multiplied.

The second shortcoming is the low accuracy of the scales. When 4 categories of probability are used, each category takes 25% of the total scale; this means that scenarios with a probability of 51% are in the same category as scenarios with a probability of 74%. Quite a loss of detail, while risk management is about details! The third and very serious shortcoming is the assumption that scenarios are independent. The consequence is that in the case of correlated scenarios their joint risk is ignored. Had the scenario "Attack on the Queen's bus" been combined with the scenario "Car ploughing into a crowd, breaking the barriers", maybe the April 30th attack could have been prevented. Finally, the risk matrix results in an inconsistent ordering of risks: scenarios with an equal risk profile are placed into different cells of the matrix, leading to different risk mitigations. A very serious shortcoming, I would say. To see why, look at the figure: in the risk matrix, curves are shown which connect all possible scenarios with equal risk. As you will see, the curves run through differently coloured cells, which should not happen; it leads to an inconsistent ordering of risks and therefore different risk mitigation for the same risks.
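To see the last shortcoming in numbers, here is a minimal sketch: two scenarios with exactly the same quantitative risk (probability × impact) land in differently coloured cells of a categorised matrix. The category boundaries, colouring and scenario values are invented for illustration.

```python
def category(value, bounds=(0.25, 0.5, 0.75)):
    """Map a value in [0, 1] to an ordinal category 0..3 (equal-width bands)."""
    return sum(value > b for b in bounds)

# Illustrative colouring of a 4x4 matrix (rows = probability category, cols = impact category).
colour = [
    ["green",  "green",  "yellow", "yellow"],
    ["green",  "yellow", "yellow", "orange"],
    ["yellow", "yellow", "orange", "red"],
    ["yellow", "orange", "red",    "red"],
]

# Two scenarios on the same iso-risk curve: probability * impact = 0.504 for both.
scenarios = {"A": (0.70, 0.72), "B": (0.90, 0.56)}
for name, (p, impact) in scenarios.items():
    cell = colour[category(p)][category(impact)]
    print(f"scenario {name}: risk = {p * impact:.3f}, cell colour = {cell}")
# Equal risk, yet A lands in 'orange' and B in 'red' -> different mitigation advice.
```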

A doctor takes the Hippocratic Oath, promising not to harm his patient while applying treatment. The same should apply to risk analysis methods: be sure that the method applied is effective and really reduces risk (prove it!). Given the above shortcomings it will be clear that risk matrices introduce risk instead of reducing it. Be well aware of that next time you encounter one. I would suggest switching to another risk analysis method, one that is consistent and fact-based.

Saturday, 27 February 2010

What’s wrong with Average?

“I'm average looking and I'm average inside. I'm an average lover and I live in an average place. You wouldn't know me if you met me face to face. I'm just your average guy.”

These few lines, taken from a song by Lou Reed, are about being average. We use averages all the time. To name a few: your IQ score, the batting average of your favourite baseball player, the average time to complete a project. When creating next year's business plan the expected future revenue is used, as are the expected budget requirements. In a cross dock the average productivity of the floor workers is used to estimate the processing times to unload or load the trucks. Plans based on averages, however, end up below projection, behind schedule and beyond budget. That is because of Jensen's inequality. When using averages as input for our plans or models, we expect to get the average outcome. In practice that holds in only a few cases. The numbers that we average are actually random numbers, for example the duration of unloading a truck. In plans we do not use all possible outcomes of the random variable but use the average for simplicity instead, not realizing that we are possibly making a big mistake. Sam Savage calls it The Flaw of Averages. One of the examples he uses to explain the flaw is the drunk on the highway: the state of the drunk at his average position is alive, but the average state of the drunk is dead.

More and more companies and government agencies are aware of the value that analytics can bring them. With the availability of Excel, doing your own bit of analytics is easy. I agree, Excel is helpful, but like any other tool in inexperienced hands, things can go wrong easily, especially when averages come into play. It is also an area where I come across the flaw of averages a lot. To give you an example, picture a cross dock operation. Trucks arrive at a certain moment in time, the trucks are unloaded, the material in the trucks is processed (sorted or repacked) and loaded onto trucks again, leaving for the next destination. The question the cross dock manager has is how many floor workers he needs to process all the goods passing through in such a way that all trucks can leave on time again. He starts up Excel and puts in the scheduled arrival and departure times of the trucks. That was the easy part, although he knows that the trucks from the north arrive on average 10 minutes late. The next question to answer is how much time is required to unload or load a truck. Actually there are two questions: first, the volume on the truck; second, the number of items a floor worker can take out, sort and put into the next truck per hour. The cross dock manager does some fact finding and comes up with an average volume per truck and an average productivity for the floor workers. Using the averages he calculates the number of floor workers required. Easy.

After a few weeks the cross dock manager is not that satisfied. His customers complain that the trucks are leaving too late half the time, suggesting that there are not enough floor workers. The manager has no clue what is wrong. When he goes out to verify the numbers he used to calculate the number of floor workers, he finds the same averages for volume and productivity. Surely he has become a victim of the flaw of averages. To see why, assume for simplicity that all the trucks arrive at the same time and leave at the same time, assume a sort window (the time between trucks arriving and departing) of 1 hour, and assume that the volume in the trucks is fixed at 100 items. What is left is the productivity of the workers. Assume that a worker can on average unload, sort and load 10 items per hour. Then, on average, 50 workers are required to process the freight of 5 trucks.
Now think of this: the floor workers' productivity is 10 items per hour on average, but not all floor workers will achieve that productivity all the time. Some will work faster, others slower, for many reasons. When a floor worker has a lower productivity he will need more time to process the items from the truck and therefore the truck will leave behind schedule. A small Monte Carlo experiment shows the impact of a 10% variation in productivity.

As you can see, the required number of floor workers runs from 40 up to 75. The average number of required workers is still 50, but the variation in productivity results in shortages nearly half the time, resulting in unsatisfied customers (and the manager's headache). To help him a little: to be sure of having enough floor workers 95% of the time, he should hire 60 workers. I left out the actual arrival times of the trucks and the actual number of items on the trucks; these would complicate the analysis, but with Monte Carlo analysis such dynamics can be modelled with ease. So instead of using just an average, use all the information you have. This will lead to better plans, and fewer headaches.
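A minimal Monte Carlo sketch of this experiment, assuming the crew's realised productivity on a given day is a single normally distributed draw around 10 items per worker per hour with a 10% standard deviation. The exact setup behind the figures above may differ, but the average and the 95% staffing level come out in the same range.

```python
import numpy as np

rng = np.random.default_rng(3)
ITEMS = 5 * 100                   # 5 trucks of 100 items, all within the 1-hour sort window
MEAN_PROD, SD_PROD = 10.0, 1.0    # items per worker per hour, with 10% variation (assumed)
N_DAYS = 50_000

# One realised productivity per simulated day; the crew size needed to clear all
# items within the hour is then items / productivity, rounded up.
productivity = np.maximum(rng.normal(MEAN_PROD, SD_PROD, size=N_DAYS), 0.1)
workers_needed = np.ceil(ITEMS / productivity)

print("average workers needed   :", round(float(workers_needed.mean())))      # slightly above 50
print("workers for 95% coverage :", int(np.percentile(workers_needed, 95)))   # ~60
print("hired 50: short on", round(float((workers_needed > 50).mean()) * 100), "% of days")
```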


Nothing wrong with being an average floor worker, but plans based on average assumptions are wrong on average!

Sunday, 31 January 2010

Risky Business


No, this is not yet another review of Tom Cruise's breakthrough movie. It is about how decision makers in government and business make their decisions, especially when the dynamics of the challenge they are facing are misunderstood. Understanding the dynamics of a system requires mastery of concepts like stocks, flows, delays, nonlinearities, feedback loops and uncertainty. A lack of understanding of these concepts causes decision makers to make the wrong decisions. With the use of simple and easy to grasp models, an analytical consultant can help government and business make better decisions by giving them a better understanding of real-world challenges.

The concept of stocks and flows is often misunderstood, leading to hard-to-resolve problems in controlling the performance of supply chains. An example of such lack of control is the bullwhip effect, which describes the magnification of fluctuating demand in a supply chain. A very simple example already shows how complex and counterintuitive flows and stocks can get. Let's take a simple inventory model, in this case a bathtub with water. In the example the bathtub is first filled with water at a varying rate, starting at 100 litres/minute, reduced to 0 and then up again (see illustration). Next, the bathtub is emptied using the same flow rate profile. The question is how much water will be in the bathtub over time. Can you draw the picture? What will be the maximum amount of water in the tub? Send me your answers, I am curious what you come up with. Many people draw a picture mimicking the sawtooth profile of the flow rates and use it to estimate the size of bathtub needed to contain all the water. As a result they underestimate it. In practice this would mean, for example, that the required storage space is estimated too small, or that the wrong countermeasures are taken to reduce, say, carbon emissions to meet certain maximum levels.

Understanding the dynamics of a decision becomes even more important when uncertainty is introduced. To return to our example, assume the rate at which water enters the bathtub is now uncertain and the above figure depicts the average rate at which water enters or leaves the bathtub. For simplicity assume that the actual rate is 10 litres/minute higher or lower than average, each with a 50% chance. Now try to estimate the water level at the end of the time period. What is the chance that it will be less than 100 litres? What is the chance of the bath overflowing if its maximum capacity is 510 litres and the starting level is 100 litres?

In business and government, average rates are often used to handle these kinds of challenges. Using that approach, the average total amount of water flowing into the bath is 400 litres. With a starting level of 100 litres the bath will never overflow, so the probability of overflowing is 0. Using the same kind of reasoning, the water level at the end of the experiment will on average be 100 litres. I have encountered this kind of reasoning a lot and it is wrong, very wrong. The correct answer requires some more work than just adding up averages, but it happens all the time. Together with misunderstanding the dynamics of the system, this leads to very poor decisions.


The answer is to apply a mathematical model; in this case a simple model can show you what will happen to the water level in the bathtub. Using a small Monte Carlo model helps you include the uncertainty in the actual rate at which the water enters or leaves the bathtub. The MC model creates insight into the influence of the uncertainty in the flow rates and also warns you about the spread of the water level. The model will tell you that the chance of overflowing is around 25% and that the chance of ending below 100 litres is around 50%. Did you guess that upfront? A mathematical model, or more generally an analytical approach, therefore gives you the opportunity to make better decisions, like buying a bigger bath or a tap that you can control better; otherwise even managing the water level in your bath becomes risky business.
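A minimal Monte Carlo sketch of the bathtub experiment. The sawtooth rate profile below is an assumption (the original illustration isn't reproduced here), so the exact percentages will differ from the ones quoted above; the point is that the simulation exposes the overflow risk and the spread in the final level that reasoning with averages hides.

```python
import numpy as np

rng = np.random.default_rng(11)

# Assumed average rate profile (litres/minute): a fill phase from 100 down to 0 and
# back up, totalling 400 litres, followed by the same profile as a drain phase.
fill_rates = np.array([100, 75, 50, 25, 0, 25, 50, 75], dtype=float)
profile = np.concatenate([fill_rates, -fill_rates])

START, CAPACITY, NOISE = 100.0, 510.0, 10.0   # litres, litres, litres/minute
N_SIM = 100_000

# Each minute the actual rate is 10 l/min above or below the average, 50/50.
noise = rng.choice([-NOISE, NOISE], size=(N_SIM, len(profile)))
levels = START + np.cumsum(profile + noise, axis=1)

overflow = (levels.max(axis=1) > CAPACITY).mean()
ends_below_start = (levels[:, -1] < START).mean()
print(f"chance of overflowing the {CAPACITY:.0f} litre tub: {overflow:.0%}")
print(f"chance of ending below {START:.0f} litres: {ends_below_start:.0%}")
```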