Friday, 26 December 2008

Forecasting: Witchcraft or Science?

At year's end, we start thinking about next year. What will it bring us? What will our lives look like in one year's time? Some people visit a palm reader or fortune-teller, maybe even a ghost whisperer, to learn about the future. Others, like Albert Einstein, never think of the future, because it will come soon enough. To look into the future we need tools other than tarot cards, tea leaves or people like Sybill Trelawney. But are these tools good enough?

Everybody knows that no one can accurately predict the future, at least that's my view. Yet companies depend on accurate forecasts to survive. In the current economic decline, forecasts of how long and how deep the decline will be are required to take the appropriate measures. In the Netherlands, several companies that depend heavily on market conditions, like ASML, TNT Express and Corus, need to act quickly in order to survive. They have to (temporarily) lay off people, close down plants or reduce costs very fast. Getting the forecast right reduces the impact of the measures that need to be taken; it also reduces the probability of applying overly harsh measures, which would set back the company's ability to take advantage when the economy booms again. Can Operations Research be of any help?

In forecasting, Operations Research offers techniques that can provide some assistance. These techniques come from the area of econometrics and have a statistical background. They can be divided into two groups: time series analysis and regression analysis. In time series analysis a formal pattern in a sequence of observations is identified and used to predict future values. In regression analysis a formal relationship is estimated between a dependent variable and one or more explanatory variables, or predictors. Note that both assume that the pattern identified in the past will prevail in the future. A lot of research has gone into developing and improving these statistical techniques. The major problem is that they require enough reliable data and should be applied by specialists.
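To make this concrete, here is a minimal sketch of the time series idea: fit a linear trend to past observations by ordinary least squares and extrapolate it, letting the identified pattern prevail into the future. The quarterly demand figures are invented, and a real forecast would of course also have to deal with seasonality, outliers and validation.

```python
def fit_linear_trend(series):
    """Return intercept a and slope b of the least-squares line y = a + b*t."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    cov = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    var = sum((t - t_mean) ** 2 for t in range(n))
    b = cov / var
    a = y_mean - b * t_mean
    return a, b

def forecast(series, horizon):
    """Extrapolate the fitted trend for the next `horizon` periods."""
    a, b = fit_linear_trend(series)
    n = len(series)
    return [a + b * (n + h) for h in range(horizon)]

# Quarterly demand (illustrative numbers only)
demand = [100, 104, 109, 113, 118, 121]
print([round(x, 1) for x in forecast(demand, 2)])
```

With a seasonal pattern on top of the trend, the same idea extends to techniques like Holt-Winters exponential smoothing.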

I have encountered many projects in which time series or regression analysis was applied without a sound theoretical foundation, leading to ill-defined relations and therefore bad forecasts. One example was in the travel & leisure business. In that particular project, statistical techniques were used to identify factors that determined the demand for hotels and residential leisure parks. The idea was to use these factors to predict future demand, on which the price would be based to maximise the firm's revenue. The error in the approach was that to forecast future demand, each of the identified predictors of demand needed to be forecasted as well. No data was available to support that. So, instead of forecasting future demand alone, the company ended up having to forecast several unknowns.

In forecasting I have obtained the best results by combining statistical techniques with judgemental methods. Judgemental methods are subjective, but allow you to incorporate intuition, expert opinion and experience (so still a bit of palm-reading is required). By interviewing experts in different areas of expertise, a qualitative vision of the future can be developed. Combining these views with what has been identified with statistical techniques leads to better forecasts. When constructing different (expert) views of the future, scenario analysis helps you to determine the strategy for the future even better. This way you can support companies in identifying the best way forward, but also show them what the future can possibly bring (not just a point estimate, but a range of possibilities), improving the quality of the strategy.
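As a toy illustration, combining the views can be as simple as a weighted average of the statistical forecast and the expert opinions, with the spread of the views as a first indication of the range of possibilities. The numbers and weights below are invented; in practice the weights would reflect the past accuracy of each source.

```python
def combine(forecasts, weights):
    """Weighted average of point forecasts from several sources."""
    return sum(f * w for f, w in zip(forecasts, weights)) / sum(weights)

statistical = 120.0    # e.g. from a time series model
expert_low = 110.0     # a cautious market expert
expert_high = 130.0    # an optimistic sales expert

views = [statistical, expert_low, expert_high]
point = combine(views, [0.5, 0.25, 0.25])
print("point forecast:", point)
print("range of possibilities:", (min(views), max(views)))
```

The range, not just the point estimate, is what feeds the scenario discussion with management.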

Sunday, 23 November 2008

Of black sheep and black swans

The credit crisis is spreading further and further. Banks would tumble over without governmental support, companies like General Motors and Toyota have to lay off people because they are not selling enough new cars, and pension funds have to apply serious cutbacks in order to keep a healthy balance between assets and liabilities. It has become commonly accepted that the cause of the credit crisis lies in the introduction of complex financial products that were constructed in the backrooms of banks by financial mathematicians and econometricians. So is it all to blame on math? Math has become a black sheep, a scapegoat to blame the credit crisis on. But math doesn't introduce new financial products; the bankers themselves do. Moreover, it was not the math that caused house prices to drop, causing the subprime mortgages to default. Bankers didn't anticipate a significant drop in real estate prices, a rare but disastrous event. They didn't (want to) see that black swan.

Math is the mother of many sciences and very important in our profession, also in the financial markets. I have spent several years applying math to model financial products in optimisation models for pension funds and insurers. The aim of the models was to analyse funding and investment strategies against an uncertain future. In most cases Monte Carlo simulation was used to generate as many future scenarios as possible and find the most robust investment and funding strategy. Math gives you the possibility to describe a financial product in a handy, accurate, objective and quantitative way, giving insight into its behaviour under different economic circumstances. The key is of course to think of the circumstances you want to analyse, as in any optimisation project. This requires an open mind, not one preoccupied with making money.
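A stripped-down sketch of such a Monte Carlo study could look as follows: simulate yearly asset returns for an investment mix and count how often the funding ratio (assets over liabilities) stays healthy. All parameters are invented for illustration and bear no relation to any real fund.

```python
import random

def funding_ratio_paths(n_paths, years, equity_share, seed=42):
    """Fraction of simulated paths ending with a funding ratio >= 1."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(n_paths):
        assets, liabilities = 110.0, 100.0
        for _ in range(years):
            # mix of a risky, normally distributed return and a safe return
            r = equity_share * rng.gauss(0.07, 0.18) + (1 - equity_share) * 0.03
            assets *= 1 + r
            liabilities *= 1.02  # liabilities grow with indexation
        if assets / liabilities >= 1.0:
            ok += 1
    return ok / n_paths

# Compare two investment strategies on the same scenario generator
print(funding_ratio_paths(5000, 10, 0.8))  # aggressive mix
print(funding_ratio_paths(5000, 10, 0.2))  # conservative mix
```

Running an aggressive and a conservative mix on the same scenarios immediately shows the trade-off between expected return and the risk of underfunding, which is exactly the kind of black swan question the bankers skipped.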

The core of the credit crisis lies in supplying mortgages to families that normally wouldn't get one. By adjusting the conditions of a mortgage (introducing the subprime mortgage), banks in the US created a possibility to increase the number of mortgages sold. When a family could no longer keep up the payments on the mortgage, the bank would simply sell the house. In selling the house they expected to make a profit, because of ever-increasing real estate prices. There is no math involved in that process, just a focus on making money, whatever you may think of that. To increase revenues (and profit) even more, derivatives were created to split up the subprime mortgage portfolios and sell them to other banks and investors, spreading the risk across the world. This created a setting comparable to Domino Day. Again, no math involved; the only driver here is money.

The first dominoes started to tumble when banks in the US were only able to sell the real estate of defaulted subprime mortgages at substantial losses. A situation that was impossible according to the bankers. This is a typical example of a black swan as described by Nassim Nicholas Taleb. As he states: “Banks hire dull people and train them to be even more dull. If they look conservative, it's only because their loans go bust on rare, very rare occasions. But (...) bankers are not conservative at all. They are just phenomenally skilled at self-deception by burying the possibility of a large, devastating loss under the rug.”

So it is wrong to blame math for the credit crisis. Math is a powerful tool that supports us in many ways. It helps us optimise logistic chains, schedule manpower, build telecom networks, search Mars for water, organise humanitarian support and much more. Had the bankers used it in a proper way, maybe they would have been able to identify the risks involved in introducing subprime mortgages.

Sunday, 19 October 2008

Operations Research Economics

I seem to be one of the OR bloggers that didn't attend the INFORMS DC meeting. Sorry for that; I would have liked to meet all who did attend, but I was too busy. With the current economic developments, in which governments spend billions to save banks from falling over, my clients also call me in distress and ask for support. This may seem weird, since in times of economic distress companies tend to reduce spending on external consulting, but OR consulting seems to be the one type that companies are still willing to spend money on even when things get rough.

When the economic mills slow down or even stop, companies face a lot of difficult questions. One of the most important is how to keep the business profitable and shareholders happy; cost cutting is thought to be the best option because it is a powerful instrument to still make a profit with decreasing revenues. But cost cutting is also very destructive; it can destroy future capabilities and ruin the lives of employees. In times of economic prosperity the focus on costs, building a lean and mean operation, is not the first priority of management. That results in a lot of fat in the organisation that can be removed by operating more efficiently. The challenge for the OR professional is to find the optimisation diet that revives profitability with the lowest impact on the operation.

In the past years I have been consulted by many different companies to support them in deciding how best to address the consequences of an economic slowdown. They wanted to know where to close down factories, which products to stop offering, how to be more cost efficient, even who to lay off. The last one certainly wasn't easy. As an OR professional you can't decide what the best action to take is; that is the responsibility of the company itself. In situations in which I was asked to suggest the best decision, I always tell the company that I can show them the impact of the various options, but they have to make the decision.

Most of the time the questions raised have to be answered on very short notice. The stock market analysts are waiting for action and they want it fast. For all listed companies this means less than three months to get results, since every quarter an update must be given to the shareholders on the financial soundness of the company and the plans for the future. The projects executed to regain profitability are of high strategic importance and are executed in, or very close to, the boardroom. This brings operations research professionals where they should be: at boardroom level. Many times I was surprised to see that CEOs and the like know so little of the added value an operations research professional can offer them.

So, even though the economy is coming to a standstill, I am still very busy. A job in the OR consulting business is therefore probably the best hedge against unemployment. In times of prosperity companies hire you because they have money to spend on the ideas they have for expanding their business. In times of economic distress they also hire you, to find the best strategy to keep them in business.

Sunday, 14 September 2008

Planes, Trains and Express services

Resource schedules play an important role in everyday life. Simply think of airline, bus and train schedules. This year the Franz Edelman Award went to the Dutch Railways for their accomplishments in improving the schedules of the commuter rail system of the Netherlands. The Dutch Railways received the prize for applying Operations Research to construct an improved timetable. As a result, the percentage of trains arriving within 3 minutes of the scheduled time increased from 84.8% in 2006 to 87.0% in 2007. This may not seem much, but it is a great achievement, since the Dutch railway system is one of the world's busiest. Even public opinion changed because of the new schedules: the number of jokes about trains never arriving on time dropped significantly (now how can I prove that?)
Schedules like the ones used in railways are also present in other areas, for example in the express market. Since express services are all about on-time delivery, they face the same challenge as the Dutch Railways. A much-used approach in designing their schedules is to start with an estimate of the amount of freight (parcels) that has to be moved between each origin and destination combination in the network. Based on the forecasted volumes, the required schedules are then developed. Designing the network is not an easy task. As you can imagine, the data and parameters used in such a case are not fixed. Think of vehicle capacities, for example. How many parcels can a vehicle carry? This depends on the composition of the parcels that have to be transported. These can be bulky but light, but also compact and heavy (like a machine or an engine). Normally a certain gross capacity is assumed for the vehicle, taking into account the freight profile of the past and assuming that it will be the same in the future.

A main cost driver in an express network is the line haul cost, in road networks as well as air-based networks. A network schedule consists of movements, each designating the time of departure from the origin, the time of arrival at the destination and the vehicle type. For trains it is exactly the same. Getting the vehicle type wrong (either too large or too small) is not efficient from a cost perspective. Since the express market is highly competitive, you cannot afford to lose money that way. However, changing the vehicle in accordance with the actual volumes is impossible most of the time, as is the case with trains. Having a good estimate of the amount and composition of the parcels is therefore a must.
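As a small illustration of the vehicle-type decision on a single movement, one can pick the cheapest vehicle whose assumed gross capacity covers the forecast volume. The vehicle types, capacities and costs below are invented.

```python
VEHICLES = [                 # (name, gross capacity in m3, cost per movement)
    ("van", 12, 150),
    ("7.5t truck", 30, 260),
    ("trailer", 90, 480),
]

def cheapest_vehicle(volume_m3):
    """Cheapest vehicle type whose gross capacity covers the volume."""
    options = [v for v in VEHICLES if v[1] >= volume_m3]
    if not options:
        raise ValueError("volume exceeds largest vehicle; split the load")
    return min(options, key=lambda v: v[2])

print(cheapest_vehicle(25)[0])
```

The whole difficulty, of course, is that the volume fed into this choice is itself a forecast.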

Obviously the amount and composition of parcels is not static but varies randomly over time. There are some characteristics of the time series that repeat themselves, like the increase in the number of parcels during the Christmas period. But estimating the amount and composition on a daily basis is really hard, if not impossible. How to address this issue then, since we still need reliable volume forecasts to construct the network? Trying to model the stochastic nature of the time series and incorporating it in the model to find the best line haul schedule will clearly complicate that model immensely. And that is under the assumption that an econometric model can be fitted for the number, volume and weight of the parcels for each of the origin and destination combinations served by the network.

My way of dealing with this random nature is to accept it in designing the network and go for the average, or some percentage of the average volume (say 90% or 110%). Based on this static forecast the network can be designed. This works fine in practice, also because many times my customers are not able to supply me with the data to estimate a volume forecast model. To improve on this we do a sensitivity analysis on the volumes used, based on scenarios that are discussed and approved by the customer.
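A sketch of this pragmatic approach: scale the average origin-destination volume by a planning factor (90%, 100%, 110%) and see how the vehicle need per lane responds. The lane volumes and the capacity figure are invented.

```python
import math

def vehicles_needed(volume, capacity):
    """Whole vehicles needed to move a volume on one lane."""
    return math.ceil(volume / capacity)

def scenario_table(avg_volumes, capacity, factors=(0.9, 1.0, 1.1)):
    """Per lane: vehicles needed at each planning factor on the average volume."""
    return {
        lane: {f: vehicles_needed(v * f, capacity) for f in factors}
        for lane, v in avg_volumes.items()
    }

lanes = {"AMS-LGG": 55.0, "AMS-BRU": 22.0}   # average daily m3, invented
print(scenario_table(lanes, capacity=30.0))
```

Lanes where the vehicle count jumps between the 90% and 110% scenarios are exactly the ones worth discussing with the customer.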

Sunday, 27 July 2008

Tactical Manpower Planning

Manpower is, for many companies, their most important source of production capacity. Simply look at healthcare, mail delivery or the retail business. According to the Fortune 500, Wal-mart is the world's biggest employer, employing over 2 million people. Number 4 is the US Postal Service. Both are examples of industries where the demand for manpower varies over time.
Identifying the best staffing levels to meet the requested manpower at each moment in time is a challenge for them, one that can be addressed effectively with the use of Operations Research. I have performed various projects on this, in a wide variety of industries. In this blog entry I will explain how I addressed this challenge. I will focus on the question of how to determine the staffing levels; in a later entry I will address the challenge of generating good shift schedules.

The first step in identifying the right staffing levels is to have a good estimate of the required manpower. As mentioned before, the required manpower varies over time, sometimes due to seasonal influences, like in agriculture or for airlines, but it can also vary on a very short timescale, like in call centres. A clear understanding of how the work is organised helps in identifying the right levels of staffing. In the airline business, for example, most of the activities performed depend on the schedule of the aircraft. At the hub of an express company the activities to be performed depend on the arrival and departure of trucks or aircraft; the same holds at railways, bus companies, etc. Focusing on the airline example: when an aircraft arrives at the airport, all kinds of activities need to be performed before it can depart again. It needs a technical check-up (to make sure it doesn't fall apart), small repairs are performed, it needs to be cleaned, refuelled, etc. Baggage handling, passenger check-in, airport security checks and boarding are also processes that depend on the airline schedule. Scheduling these activities is a challenge in itself, also because in most cases special skills are required to perform them. As you can imagine, the scheduling of these activities greatly influences the demand for manpower. When you do a bad job at it, for example by scheduling all activities at the same time, you create a peak in the required manpower. When you are able to create a manpower requirement profile that is as flat as possible, it is much easier to create efficient staffing levels for each skill. Sometimes the company that hires you supplies the required manpower, which is then taken as given. An easy job, you might be tempted to think. However, my experience indicates that having a detailed look at it, and trying to influence the organisation of the work, is worthwhile before identifying the best staffing levels.
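A toy example of why the organisation of the work matters so much: build the required-manpower profile per time slot from the scheduled activities and compare the peak when the activities are clustered versus spread out. The activities (start slot, duration, staff needed) are invented.

```python
def manpower_profile(activities, horizon):
    """Required staff per time slot, given (start, duration, staff) activities."""
    profile = [0] * horizon
    for start, duration, staff in activities:
        for t in range(start, start + duration):
            profile[t] += staff
    return profile

# The same four activities, all started at once versus spread out in time
clustered = [(0, 2, 3), (0, 2, 3), (0, 2, 2), (0, 2, 2)]
spread = [(0, 2, 3), (2, 2, 3), (4, 2, 2), (6, 2, 2)]

print("peak when clustered:", max(manpower_profile(clustered, 8)))
print("peak when spread:", max(manpower_profile(spread, 8)))
```

The flatter profile needs far fewer people on site at any one time, even though the total workload is identical.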

A typical picture of the required manpower looks like the profile of the toughest stage in the Tour de France: many peaks and valleys that need to be covered with a set of shifts of certain lengths. The objective can either be to cover the peaks at all times, or only up to a certain ambition level, with the remaining work covered by hired workers. Identifying a new set of shifts involves taking into account a vast number of conditions. Collective labour agreements and governmental regulations give guidelines on the minimum and maximum duration of shifts, the number of breaks in a shift and appropriate shift start times. Last but not least, employee preferences and the scheduling principles applied also influence the shift set to be modelled. Each of these conditions needs to be translated into formal restrictions of the model, most of the time leading to a mixed integer programming model. I usually solve this kind of model by generating all possible shifts, using different start times, required breaks and durations of the shifts and breaks, and letting the management and employee representation of the company review them. After approval I feed the shifts that are acceptable to both management and employees into the model that identifies the best set. Sometimes new shift times are out of the question, because the shifts are part of the collective agreement. In that case it is still possible to improve, since the number of each shift can still be optimised. The same model as before can be used, but now the shift set is fixed to the shifts currently in use. The objective function needs some attention, especially when you want to use the model for less than 100% coverage of the required manpower. I have had good experience with an objective function that minimises the absolute difference between the required and available manpower.
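For a tiny instance the model described above can even be solved by brute force, which makes the idea easy to show: enumerate how many workers to put on each candidate shift and minimise the total absolute difference between required and available manpower per hour. In practice this is a mixed integer programme; the requirement profile and shifts below are invented.

```python
from itertools import product

REQUIRED = [2, 4, 6, 6, 4, 3]        # manpower needed per hour
SHIFTS = [                           # hours covered by each candidate shift
    [1, 1, 1, 0, 0, 0],              # early shift
    [0, 0, 1, 1, 1, 0],              # day shift
    [0, 0, 0, 1, 1, 1],              # late shift
]

def best_staffing(required, shifts, max_per_shift=6):
    """Brute-force search over shift counts, minimising total |required - available|."""
    best = None
    for counts in product(range(max_per_shift + 1), repeat=len(shifts)):
        available = [
            sum(c * s[h] for c, s in zip(counts, shifts))
            for h in range(len(required))
        ]
        dev = sum(abs(r - a) for r, a in zip(required, available))
        if best is None or dev < best[0]:
            best = (dev, counts)
    return best

dev, counts = best_staffing(REQUIRED, SHIFTS)
print("staff per shift:", counts, "total deviation:", dev)
```

A real instance with dozens of shifts and a week-long horizon needs a MIP solver, but the objective is exactly this one.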

The result of the optimization could look like this. Possible savings due to better shift times or shift numbers can be large; the savings obtained varied between 5% and 30% in the projects I performed. In case of variable demand for manpower, it pays off to rerun the analysis regularly to see if the current shift times and numbers still fit the required manpower.

Thursday, 26 June 2008

Instant Optimisation

Get your organisation optimised! All you need is our optimisation software package OptimalAll: install it on your computer, add some data and hit the OPT button. After only a few minutes of number crunching, your computer will come up with the optimal solution to your optimisation challenge. You can get OptimalAll at your local store for only €99.95. Ask for the special offer to have the software installed and run with the support of one of our optimisation experts for only €150, all in.

Visit any trade fair, look in any magazine, and you will find companies that offer this kind of optimisation solution. As with any ad, things are presented a little brighter than reality. Don't expect that buying optimisation software solves your optimisation challenge; there is more to optimisation than you think. Many troubled managers facing optimisation challenges are tempted by the tales salesmen tell about the capabilities of the solution they sell. Because of their focus on selling, salesmen tend to exaggerate a bit, leaving the troubled manager in seventh heaven with his just-bought answer to everything. (Note: as some of you may know, you don't need software to figure that out; it is 42.) After a while he wakes up and realises that there is more to optimisation than just buying a tool.

In my work as an optimisation consultant I come across many of these managers. Because of their past experience they think optimisation is something for the academic world, not something that can deliver results for them. I tell them a different story. About one thing the troubled manager is right: successful optimisation in practice is not just buying software. In my opinion it consists of a combination of three things. First of all, a thorough understanding of the business that the manager is in. Without that knowledge you know nothing about the challenges the manager faces and what the do's and don'ts are in his or her business. Talking to people in the manager's organisation, having a look around, gathering data, analysing it, etc. will help you build up that knowledge. In that process you will also build up a clear understanding of what needs to be optimised and to what extent.

The next ingredient of a successful optimisation project is using that knowledge to build and tune a fit-for-purpose optimisation model. This can be a one-off model or a model that is part of optimisation software like a scheduling, rostering or vehicle routing package. The model enables you to generate and rank various alternatives to the challenge the manager faces. When the model has been tested, the best solution can be identified. The next step then is to implement the model. This is where the software comes in as the third ingredient of a successful optimisation project.

Software is an enabler for transferring your optimisation knowledge to the organisation. The main step is training the employees that will use the model in the future; the organisation needs some basic optimisation knowledge to use the model effectively. Another is integrating your model with the business ICT systems, to be able to feed your model with the required data and feed the results of the optimisation run back into those systems. This will enable the organisation to perform the optimisation runs by themselves in the future and capitalise on their investment in optimisation software and consulting.

Saturday, 17 May 2008

And the winner is....

In the Netherlands, lists are published every year on the performance of Dutch hospitals. The lists tell the consumer which hospital scores best on a variety of key performance indicators, such as customer friendliness, the number of cancelled OR sessions and the number of caesareans performed. Consulting firms, consumer agencies and even newspapers present rankings of the hospitals, each using their own methodology. This leads to different rankings for the same hospital, leaving the patient-to-be confused about the results. How should a (potential) patient decide which hospital to go to? Wouldn't it be better to have an overall score for each hospital, so a simple ranking can be constructed based on that figure? The question is of course how to create such a ranking.

It is not only the consumer that is interested in the ranking of hospitals. Hospitals themselves also like to know where they stand compared to their peers. Since the Dutch healthcare market is changing from a regulated one (with fixed prices for services) to a more liberalised one (with free pricing for some standardised products), a low ranking could mean fewer “customers”, leading to lower revenues. The liberalisation of the market also leads to more pressure on the effectiveness and efficiency of the hospitals. But how can we compare the performance of hospitals? How to benchmark the performance of each of them and rank the hospitals in an objective and simple way?

I came across this question in a project a few months ago. Hospitals are accustomed to benchmarking, but the drawbacks of the techniques used were something they wanted to get rid of. In most cases some kind of ratio analysis is used. This clearly doesn't give you objective measures and does not allow for a simple ranking: various ratios can point in contradictory directions. So we had to search for better techniques. We came across a linear programming based technique, Data Envelopment Analysis. This technique supports the benchmarking questions of the hospitals, but can also be used to rank hospitals, either as a whole or per type of care.

Data Envelopment Analysis (DEA) is a much-used technique in benchmarking, also in healthcare. It was developed in the 1970s by Charnes, Cooper and Rhodes. In DEA the most efficient combination of production units is constructed, called the efficient frontier. Efficiency is then measured as the distance to that efficient frontier. By modelling the hospitals as production units with certain inputs and outputs, DEA can be used to identify which hospital converts the inputs into outputs most efficiently. The efficiency score can then be used to rank the hospitals.
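With several inputs and outputs, DEA solves a small linear programme per hospital. With a single input and a single output, the model collapses to comparing simple productivity ratios against the best performer, which is enough to show the idea. The hospital figures below are invented.

```python
def dea_efficiency(units):
    """units: {name: (input, output)} -> {name: efficiency score in (0, 1]}.

    Single-input, single-output DEA: each unit's output/input ratio is
    compared against the best ratio, which defines the efficient frontier.
    """
    best_ratio = max(out / inp for inp, out in units.values())
    return {name: (out / inp) / best_ratio for name, (inp, out) in units.items()}

hospitals = {              # (staff FTE, patients treated per year), invented
    "A": (200, 9000),
    "B": (150, 7500),
    "C": (300, 10500),
}
for name, eff in sorted(dea_efficiency(hospitals).items()):
    print(name, round(eff, 2))
```

Units scoring 1.0 lie on the frontier; the others can read their score directly as "fraction of best-practice productivity".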

A nice feature of DEA is that it can be used to determine how a hospital can improve its performance, either by lowering its use of inputs (i.e. people, cost, equipment, etc.) or by increasing its outputs (patients served, quality, etc.).

The project was performed at mid-sized hospitals in the Netherlands for the clinical and day care departments. The results were very well received; the next step will be to extend the benchmark to other parts of the hospital, like the OR and the outpatient clinic.

Sunday, 27 April 2008

Express netwORk design

The last few months I have been busy with several projects to improve, or design from scratch, the infrastructure of the domestic networks of an express company in several countries around the world. In infrastructure planning, the number and location of hubs/depots and the network design are determined. In express networks (but also in passenger, freight and communications networks), hubs are used to reduce total cost through more efficient vehicle/network utilisation. With hubs, a better match can be achieved between available capacity and the demand for it. Sure, using a hub will increase the distance travelled between the origin and destination depot, but the consolidation possibilities will offset this.

Infrastructure planning is quite a challenging puzzle and one that is of high strategic importance to an express company. Getting it wrong will cause the company to invest or disinvest in the wrong locations, leading to poor overall performance of the express supply chain. Apart from the complex puzzle to solve, performing these international projects is challenging in itself, especially when it is not possible to talk to each other face to face frequently enough and cultural differences also influence the project. A prerequisite in any optimisation project is to have a clear idea of the objectives and requirements. As I mentioned before, this is already hard when you speak the same language. Guess what happens when you have to take that hurdle first. A lot of effort goes into making sure that you understand each other before you even get to the optimisation part.

Let us have a look at the supply chain of an express company to see how we can address the infrastructure planning problem. The supply chain starts when a parcel is offered for transport to a certain destination, within a given timeframe. For example, a parcel needs to go from Amsterdam to Liege and has to be there at 9:00 in the morning. The parcel is picked up and taken to the depot assigned to the origin location. At the depot it is decided to which hub the parcel is sent to reach its final destination within the available service window.
Using either air or road, the parcel is transported to the hub, where it is sorted and put on a line haul to either another hub or its destination depot. From the depot it is taken to its destination location. To be able to deliver the parcel on time, an express company uses a time-definite network schedule to plan the transportation of the parcel in advance. That way the express company knows the time before which a parcel needs to be picked up in order to be delivered on time at its destination, the so-called cut-off time.
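The cut-off time itself follows from simple backward scheduling: start from the delivery deadline and subtract the planned duration of each leg in the route. The leg durations below are invented.

```python
def cutoff_time(deadline_min, leg_durations_min):
    """Latest pickup time (minutes since midnight) that still makes the deadline."""
    return deadline_min - sum(leg_durations_min)

def fmt(minutes):
    """Render minutes since midnight as HH:MM."""
    return f"{minutes // 60:02d}:{minutes % 60:02d}"

# Deliver by 18:00; legs: pickup->depot, depot->hub, hub sort,
# hub->depot, depot->delivery (durations in minutes, invented)
legs = [30, 90, 60, 90, 45]
print("latest pickup:", fmt(cutoff_time(18 * 60, legs)))
```

Everything the schedule promises to the customer hangs on these backward-computed times, which is why getting the movement durations right matters so much.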

In designing an infrastructure plan, one has to decide how many depots and hubs are needed and where they should be located. Besides that, the connections between depots and hubs need to be determined. These can be depot-depot, hub-hub or depot-hub (or vice versa) connections. For each depot you have to determine which service areas are assigned to it. Opening a depot or hub requires investment: land needs to be acquired, a building is required to store parcels, and parking/manoeuvring space for the vehicles is needed. Determining the size and layout of the site is a puzzle in itself; I will come back to that in a later blog entry. The connections between the depots and the hubs also involve costs, which depend on the distance travelled and the vehicles used. Since we are designing the infrastructure of an express company, we also need to take the service capabilities into account. To my knowledge there is no published research that solves this kind of puzzle while taking into account the fixed cost of opening a location, the variable cost of handling material at that location and the cost involved in operating a connection between two locations, either hub or depot. So we need to be pragmatic and creative to solve it.

Our approach is to split the hub and depot location decisions. In a stepwise approach we can then start identifying the best infrastructure and network design. With our specially designed model, BOSS, we determine the best possible set-up of the network. BOSS determines the best possible hub location set-up, minimising the cost of the line haul (the connections between the locations). Line haul cost is the major cost driver of an express network. After optimisation, the (fixed and variable) costs for the hubs are added. Inputs for this kind of study are expected volumes, service offerings and possible locations. We discuss these inputs with the local management. Since infrastructure is of strategic importance, several scenarios on service and volume are taken into account. Possible locations are also discussed, since not all available locations are suitable. With our analysis results we support the management in deciding on the best infrastructure plan for now and the near future, making the trade-off between cost, service and volume. Next to these criteria, carbon emission is also becoming an important decision criterion in these studies.
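The hub-choice step can be caricatured in a few lines: enumerate candidate hub sets, route every depot pair via its cheapest open hub, and minimise line haul cost plus fixed hub cost. The distances, costs and the co-location of hubs with depots are all invented simplifications; the real model (BOSS) is far richer than this.

```python
from itertools import combinations

DEPOTS = ["A", "B", "C", "D"]
DIST = {  # symmetric distances in km, invented
    ("A", "B"): 100, ("A", "C"): 150, ("A", "D"): 200,
    ("B", "C"): 120, ("B", "D"): 160, ("C", "D"): 90,
}

def d(x, y):
    """Symmetric distance lookup."""
    return 0 if x == y else DIST.get((x, y), DIST.get((y, x)))

FIXED_HUB_COST = 500
COST_PER_KM = 1.0

def network_cost(hubs):
    """Fixed hub cost plus line haul cost, routing each pair via its cheapest hub."""
    total = FIXED_HUB_COST * len(hubs)
    for o, dest in combinations(DEPOTS, 2):
        total += COST_PER_KM * min(d(o, h) + d(h, dest) for h in hubs)
    return total

# Enumerate all one- and two-hub configurations and keep the cheapest
best = min(
    (frozenset(s) for r in (1, 2) for s in combinations(DEPOTS, r)),
    key=network_cost,
)
print(sorted(best), network_cost(best))
```

Full enumeration obviously stops scaling quickly, which is one reason a dedicated model is needed for realistic country networks.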

Monday, 24 March 2008

The OR in HR; Manpower planning

There are many challenges that will push the HR manager towards a more quantitative approach to HR. In this blog entry I will focus on one of them: aging. In future entries I will present more. Human resource management is seen as a “soft” discipline and often does not get the respect it deserves from other managerial disciplines. Many times the HR manager is seen as obstructive, creating obstacles that hamper the other managers of the company. Also, personnel is not an explicit subject discussed in the board, other than in terms of costs. On average, personnel accounts for 60% to 70% of the total expenditure of a company, but investments in human capital are hardly measured. The CFO/CEO takes care of these decisions, because the HR manager is often not accustomed to using figures or models. This is something that will change rapidly in the coming years. Techniques from the field of Operations Research can support the HR manager and will ensure that attention in the board for HR subjects, other than the costs involved, will increase.

Many companies are, or will be, facing the consequences of aging and diminishing population growth. These trends will shrink the available workforce over the coming years; even with stable workforce requirements it will become harder and harder to keep the available workforce at the required level.

As can be seen from the above figure, the available workforce within the company will diminish over time because of aging, increasing the gap between the required and available workforce. This means that every year, even when the company stays the same size, more people need to be recruited to fulfil the workforce demand, putting the recruitment department under a lot of pressure. That pressure increases further because the pool of people to recruit from will shrink as well, due to diminishing population growth. The situation worsens even more because companies don’t tend to be stable in size, and service offerings will change, leading to a different set of required capabilities. How can the HR manager address all this?

Manpower planning
OR can assist the HR manager in identifying the current and future effects of aging and diminishing population growth. Changes in company strategy can be analysed as well, enabling the HR manager to set a recruitment strategy in support of the company goals. As in any other OR project, the first step is to analyse the current situation: what is the current age distribution, and what are the company goals in terms of the size and capabilities of the required workforce? This gives insight into the current impact of aging. Using scenario analysis, company growth scenarios can be identified together with the management of the company. This gives insight into the size and capabilities of the required future workforce.
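A first quantitative look at the impact of aging can be as simple as projecting retirements from the current age distribution. The toy projection below (all figures invented, with a single hypothetical entry age and retirement at 65) counts how many recruits are needed each year just to keep headcount constant:

```python
def recruits_needed(age_counts, retirement_age, years):
    """Yearly recruitment required to hold the workforce at its current size,
    assuming the only outflow is retirement."""
    ages = dict(age_counts)          # work on a copy: {age: headcount}
    needed = []
    for _ in range(years):
        # Everyone reaching retirement_age this year leaves...
        retiring = sum(n for a, n in ages.items() if a + 1 >= retirement_age)
        ages = {a + 1: n for a, n in ages.items() if a + 1 < retirement_age}
        # ...and is replaced by recruits, entering at age 25 in this toy model.
        ages[25] = ages.get(25, 0) + retiring
        needed.append(retiring)
    return needed

# Invented age distribution, skewed towards older staff as in many
# companies facing aging.
workforce = {40: 30, 50: 40, 63: 20, 64: 10}
print(recruits_needed(workforce, retirement_age=65, years=3))
```

Even this crude sketch makes the recruitment peaks visible years in advance, which is exactly the kind of insight the scenario analysis is after.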

Using Markov theory, the current and future development of the company’s workforce can be modelled. When age is incorporated, insight is created into the positions for which either aging will become a problem or the supply of manpower will diminish. These manpower planning models are characterized by transition probabilities that either pull employees to the next step in their career (when vacancies arise) or push them (when promotions are based on years of experience or personal motivation). In practice a combination of push and pull models is used. The transition probabilities for these models need to be estimated from historical data.
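A minimal push-model sketch makes this concrete. The grade structure and transition fractions below are invented; in practice they would be estimated from the company’s historical personnel data:

```python
# Grades: junior, senior, manager.
# P[i][j] is the yearly fraction of grade i moving to grade j; the
# shortfall of each row from 1.0 represents attrition (people leaving).
P = [[0.70, 0.15, 0.00],   # junior: 15% promoted, 15% leave
     [0.00, 0.75, 0.10],   # senior: 10% promoted, 15% leave
     [0.00, 0.00, 0.85]]   # manager: 15% leave

stock = [100.0, 60.0, 20.0]      # current headcount per grade
recruits = [20.0, 0.0, 0.0]      # fixed yearly intake, juniors only

def step(stock):
    """One year of flows: new_j = sum_i stock_i * P[i][j] + recruits_j."""
    n = len(stock)
    return [sum(stock[i] * P[i][j] for i in range(n)) + recruits[j]
            for j in range(n)]

for year in range(5):
    stock = step(stock)

print([round(v, 1) for v in stock])  # projected headcount after 5 years
```

Repeating the projection under modified transition fractions is exactly how internal measures (extra training, changed remuneration) are evaluated.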

When the development of both the required and available workforce is modelled, strategies to align the two can be formulated and evaluated. Internal measures, the area of the HR manager, can be modelled as changes to the transition probabilities, representing for example additional training, changes in remuneration, or extending the age to which people work (over 65). External measures, the area of the other board members, like the adoption of new technologies, can be modelled as changes in future workforce demand. Using this approach, the HR manager will be able to identify, together with the other managers in the board, the best strategy to ensure that the right amount and quality of people will be available at all times.

The above approach is not new; it is much used in the military because of its strict order of promotion. But it is not restricted to that setting. Airlines, for example, use it to make sure that enough people start training to eventually become a pilot of a Boeing 747; as you can imagine, an airline cannot afford to have no pilot available to fly it. More companies should use it, to make sure that they know what they will be facing in the near future. Already many companies face difficulties in attracting enough qualified people. It is time to start up Excel, build the model and start the analysis.

Thursday, 7 February 2008

The correct answer to the wrong question!

In my job I sometimes take on a student from an operations research course who is in her or his final year, to work on a master’s thesis. The objective of such an assignment is to gain the experience of bringing OR knowledge into practice, either by working on a project within our company or by getting out in the field, working for one of our customers. I prefer the students to get out in the field, as that is the most interesting for them as well as for me. Many times it is the first time that the students are confronted with real-world challenges, giving them a rude awakening. They discover that the “real world” cases used to train them in class are not as real world as they were told. Also, in the field they learn a very important (maybe even the most important) lesson: practicing OR is not about drawing up a complex mathematical model and solving it, unless they prefer a career at university.

When I was at university, which seems ages ago, the primary focus of the courses was to teach us the basic techniques. We were taught fundamentals like algebra, statistics, modeling, linear programming, integer programming, simulation, Markov decision theory and queuing theory, just to name some. A few courses were on applying these techniques. During 2 to 3 semesters we were supplied with a couple of “real world” cases that we had to solve. It was fun, but far away from the world I now work in as an operations research professional. I learned more about OR in practice in my first months at ORTEC than in all the years at university. There was a lot of focus on the “Research”, but what about the “Operations”?

Solving the right problem and data availability were two things, among others, I never had to worry about in class, but they were my first lessons of OR in practice. How do you make sure that you get the data and use the correct model? The solution is not to think about data first, nor even of a model. The first thing to do is to get a better understanding of the problem you are asked to solve. And there is no better way of understanding the problem than to go out there and see what it is about. See why it is a problem, how it is caused, and learn what the acceptable directions to solve it are. Talking to the people who have to deal with the problem every day is the best way of learning about it. It helps you identify what the problem really is. The management may have told you that the utilization of a machine is very low and the scheduling of jobs on the machine needs to be improved. It might be tempting to model that, but the real problem may be in the scheduling of the personnel that operates the machine. You don’t want to be the consultant that brings the correct answer to the wrong question. You will be out of business soon.

When you have identified the real problem, data gathering can start. Because you now have a good understanding of the problem, you can precisely define what data is needed, taking into account the data that is available. This way nobody at the IT department gets frustrated by all of your questions. After checking the data and quantifying the problem you can start solving it. Sometimes you are already finished at this point, because your analysis directed the management to the real problem, not the perceived one. The next step is to build and solve the model. My experience is that the best result for a real-world problem is not the optimal one; it is the one that provides the company with the best possible result, simplifying everyday work and saving lots of money quickly. Many times a company cannot afford to wait for the optimal solution; they need results and they need them now. And that is what OR in practice is about: getting the best possible results now!

In my opinion, this is an experience students taking a course in operations research should also have at university. They should therefore work on real-world challenges, supplied by real companies, with the involvement of the management of those companies. They should visit the company and work there, so they learn how to identify the real problem and the acceptable directions to solve it. Not a nice and easy approach in which the professor thinks up a case based on a textbook example, with all the data ready on a plate.

Saturday, 26 January 2008

Express OR

I have been involved in a project for an express company for a few months now and I am quite enthusiastic about it. The express company is faced with a lot of challenges; from an operations research professional’s perspective it is like being in a candy store. The supply chain of a typical express company is rather simple. Parcels are picked up and brought to a depot using small vans. Sometimes customers bring their freight directly to the depot. From the depot, in most cases, the parcel is transported to a hub. There all the parcels are sorted and put on trucks. Depending on the available time to transport the parcel, it is transported via one or more hubs using either road or air transport. Eventually it is transported from the last hub to the depot of its destination. There it is put on a small van for transport to its final destination.

Depending on the kind of freight, the parcel has to be delivered by a certain time. For example, if you send a document and want it to be at its destination before 9 o’clock, this would be a premium service. When the document or parcel is not required to be at its final destination that soon, it is considered normal freight, giving the express company more time to deliver it. Because the transportation of the parcel or document, also called a consignment, takes time, it has to be available on time at the origin depot for transport. The same holds at the destination depot; otherwise the express company will not be able to deliver it on time.

An express company has to take various decisions in building its network. It needs to decide how many depots and hubs it wants to use (infrastructure) and how to connect the depots to the hubs (line haul schedule, including modality). It also needs to decide how to assign regions to a depot for pick up and delivery (PUD) of the freight, and construct efficient PUD routes. A large number of depots will lower the cost of the pick up and delivery of the freight, but will increase the infrastructure and line haul cost. How many locations should it therefore use? Clearly there is a trade-off, as can be seen in the graph below.

At the express company I have performed several projects on this subject, in several countries. Currently we are having a look at the hub infrastructure in a South American country. To solve it, a network flow kind of approach could be applied, similar to the one I used for the UNJLC assignment. There is a difference, however, and that is the time constraint. The freight has to be on time, something that is not easy to incorporate in such a model. To solve the express company’s challenge we therefore designed a model that is capable of constructing a high level line haul schedule, including the timing of the freight. The model is fed a set of possible hub locations, from which a predefined number of hubs is selected by the model. These form the optimal infrastructure. We feed the model a set of predefined locations because you wouldn’t want a hub to be located “in the middle of nowhere”, far away from important highways or junctions. Also, the management of the express company may have special interest in certain locations, or may want to fix locations from the current infrastructure. We run the model several times, varying the number of hubs to be selected. This way the best set of hub locations can be identified: the set with the lowest total cost. The next step is then to construct the network schedule in detail, something that I will take up in a future blog entry. Using our model, the express company is able to identify which locations they should keep and where to invest in new ones, saving a lot of money.
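The “run the model several times, varying the number of hubs” procedure can be illustrated with a toy version that ignores the timing constraint. Everything below is invented for illustration (coordinates, volumes, the fixed hub cost and the cost rate), and it uses straight-line distance where a real study would use actual road or air line haul costs:

```python
from itertools import combinations

def dist(a, b):
    """Straight-line distance between two (x, y) points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Invented instance: depot -> (location, yearly volume); candidate hub sites.
depots = {"D1": ((0, 0), 40), "D2": ((10, 0), 25), "D3": ((5, 8), 35)}
candidates = {"H1": (2, 1), "H2": (8, 2), "H3": (5, 5)}
hub_fixed = 300    # yearly fixed cost per open hub (invented)
rate = 1.0         # line haul cost per volume-unit per distance-unit

def total_cost(open_hubs):
    """Line haul via each depot's nearest open hub, plus fixed hub costs."""
    linehaul = sum(vol * rate * min(dist(pos, candidates[h]) for h in open_hubs)
                   for pos, vol in depots.values())
    return linehaul + hub_fixed * len(open_hubs)

# One "model run" per hub count p: keep the cheapest p-subset of sites.
results = {p: min((total_cost(s), s) for s in combinations(candidates, p))
           for p in range(1, len(candidates) + 1)}
best_p = min(results, key=lambda p: results[p][0])
print(best_p, results[best_p])
```

Comparing `results` across the hub counts is the miniature equivalent of comparing the model runs: the cheapest entry identifies both how many hubs to open and which ones.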