Thursday, 28 November 2013

Gaining a competitive advantage: a value chain approach to analytics and optimisation


For decades companies have been exploiting new technologies to improve their performance and gain a competitive advantage. Over time the areas in which they want to improve and the challenges they want to address haven’t changed. Every company wants to identify its most valuable customers and its most important products/markets, and wants to perform its activities as efficiently as possible. The competitiveness of a company comes from how successful it is in achieving “most valuable”, “most important” and “most efficient”. Operations Research and analytics offer new ways to achieve these goals, but how can companies determine when and how they will help them achieve their goals and gain a competitive advantage?

A company basically has two ways to improve its overall performance. The first option is to improve operational effectiveness, resulting in cost reductions. Operational effectiveness usually translates into standardized processes and best practices. As soon as activities become standard or best practice they will obviously be copied by the competition, reducing the competitive advantage to zero. Enhancing business differentiation is the second option to improve overall performance; it allows a company to charge a premium price for its product or service, hence increasing revenue. A sustainable competitive advantage comes from doing things differently, from having a distinctive way of competing. Focusing on differentiation alone is not sufficient though; operational effectiveness remains necessary in order to stay in the game.

In their drive to become “most efficient” many companies have changed their organisation to conform to ERP technology (standardisation and best practices), instead of the other way around. As a result there is little difference between companies in the way they make decisions, and therefore little or no competitive advantage in using an ERP. The positive side of ERP technology is that data on nearly every activity of a company is recorded and available for analysis. Today more and more companies are starting to use that data to populate dashboards (descriptive analytics) to learn about their current performance, with the ambition to use predictive and prescriptive analytics to improve their competitiveness. The ever-growing availability of other (external) data sources further strengthens this development. Operations Research and analytics provide the models and tools to exploit this (big) data and find actionable insights within a structured decision-making process. This offers the opportunity to remain efficient, but also to become distinctive and gain a competitive advantage.

Michael Porter’s value chain approach can help companies identify the areas in which they can benefit most from analytics and operations research. The value chain approach divides a company into technologically and economically distinct activities, each with a specific contribution to the value created by the company. The activities are split into two categories. Primary activities deal with the steps and processes required to transform raw materials into products or services, including sales. Support activities, such as infrastructure and human resource management, enable the primary activities.

Michael Porter's Value Chain
A company’s value chain is a system of interdependent activities, which are connected by linkages. These linkages exist when an activity affects the cost or effectiveness of one or more other activities of the company. For example, the quality of sourced crude oil (procurement) can have a large impact on the cost of refining it (operations). Linkages also require activities to be coordinated. On-time delivery of a parcel ordered at Amazon requires procurement, inbound logistics, operations, outbound logistics and customer service activities to work smoothly together. As Michael Porter describes in his book On Competition, careful management of linkages is often a powerful source of competitive advantage because of the difficulty rivals have in perceiving them and in resolving trade-offs across organisational lines. A recent survey by PwC confirms this: it showed that supply chain leaders, like Apple, often outsource production and delivery (both primary activities) but retain global control (they manage the linkages) over core strategic functions such as new product development, sales and operations planning and procurement. A Gartner survey of supply chain executives in 2012 highlighted the inability to coordinate and synchronise end-to-end supply chain processes as one of the major obstacles to gaining a competitive advantage. This is exactly where operations research and analytics can offer a helping hand.

Effective decision making in supply chains, optimising the linkages between a company’s activities and across companies in the supply chain, requires advanced modelling and analytical skills because of the global scale and complexity of current supply chains. Achieving a competitive advantage starts with a clear view of the current performance of the complete supply chain. Using data from the ERP system and dashboards, supply chain managers now have an instant view of the performance of any part of the supply chain. Analytics can help direct their attention to those parts of the supply chain that perform differently from what is expected. For example, a supply chain manager at Zara can analyse point of sale (POS) data to identify and notify suppliers of potential stock-out situations and adapt the global replenishment strategy to changes in local demand. The insights drawn from this analysis direct actions, for example to better manage inventory by identifying slow and fast movers, reducing stock levels and capital requirements throughout the supply chain. Predictive models that combine actual sales figures, inventory levels, CRM data and data from social media allow for the optimisation of sales promotions, increasing revenue because (potential) customers are targeted more effectively with a better offer. More complex questions, like selecting the number and location of warehouses, are best addressed with prescriptive models, as these models are capable of evaluating and optimising all linkages in the supply chain while taking into account every relevant detail. The result is a distinctive, company-specific supply chain design that improves asset utilization and profitability.
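
To make the prescriptive step a bit more concrete, here is a minimal sketch of a warehouse-location model in Python using the open-source PuLP library. All site names, demands and costs are invented for illustration; a real study would of course use the company’s own network data and many more constraints.

```python
# Minimal warehouse-location sketch with PuLP (pip install pulp).
# All sites, demands and costs below are illustrative, not real data.
import pulp

sites = {"Rotterdam": 120_000, "Lyon": 95_000, "Warsaw": 80_000}   # fixed opening cost
regions = {"Benelux": 900, "France": 700, "CEE": 500}              # demand (pallets/week)
transport = {                                                      # cost per pallet
    ("Rotterdam", "Benelux"): 4, ("Rotterdam", "France"): 9, ("Rotterdam", "CEE"): 14,
    ("Lyon", "Benelux"): 10, ("Lyon", "France"): 3, ("Lyon", "CEE"): 12,
    ("Warsaw", "Benelux"): 13, ("Warsaw", "France"): 12, ("Warsaw", "CEE"): 4,
}

model = pulp.LpProblem("warehouse_location", pulp.LpMinimize)
open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
flow = pulp.LpVariable.dicts("flow", transport, lowBound=0)

# Objective: fixed cost of opened warehouses plus transport cost.
model += pulp.lpSum(sites[s] * open_site[s] for s in sites) + \
         pulp.lpSum(transport[k] * flow[k] for k in transport)

for r, demand in regions.items():               # every region fully served
    model += pulp.lpSum(flow[(s, r)] for s in sites) == demand
for s in sites:                                 # ship only from opened warehouses
    model += pulp.lpSum(flow[(s, r)] for r in regions) <= sum(regions.values()) * open_site[s]

model.solve(pulp.PULP_CBC_CMD(msg=False))
print({s: int(open_site[s].value()) for s in sites}, pulp.value(model.objective))
```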

We are shifting from a world in which data is arranged ahead of time in ERP systems, to serve a predefined process, to a world in which data is arranged at the point of use and at the moment of use, to serve the decision at hand. Smart use of optimisation models will allow companies to move away from best practices and standardized ways of decision making towards a distinctive decision-making process, opening up new opportunities to gain a competitive advantage. A value chain approach provides guidance in identifying where and how operations research and (big data) analytics can best be put to work.

Friday, 13 September 2013

The Impact of Operations Research on People, Business and Society


Last week the international conference on Operations Research, OR2013, took place in Rotterdam. The conference was the result of the close cooperation of the German OR Society (GOR), the Erasmus University Rotterdam and the Dutch OR Society. The conference started with a keynote from Alexander Rinnooy Kan, an expert in our field and several times ranked as the most influential person in the Netherlands, on “How to educate Operations Research practitioners”. The key point of his lecture was that although Operations Research practitioners have been successful in applying their knowledge, continued education is of great importance. The obvious reason is keeping pace with new developments in research; on the other hand, trends like Big Data give rise to new problems and applications, where research and practice can go hand in hand. The OR2013 conference programme had a special plenary session, led by the Chair of the Amsterdam Business School, Marc Salomon, in which special attention was given to the impact of Operations Research on people, business and society. Over 50 C-suite business representatives attended the plenary as special guests.

Wim Fabries of Dutch Railways
The session started with Wim Fabries, Director of Transport and board member of the passenger division of Dutch Railways. Fabries illustrated that providing reliable rail transportation is complex, requiring many interrelated decisions concerning the timetable, rolling stock and crew. To support these decisions Dutch Railways has a special department, the department of Process Quality and Innovation, which uses Operations Research. The Operations Research practitioners of Dutch Railways had their finest hour when a new and robust timetable had to be constructed to facilitate the growing passenger and freight transport on the already highly utilized railway network. The new schedule was successfully launched in December 2006, providing a schedule for about 5,500 daily trains. This tour de force was rewarded with the Franz Edelman Award in 2008. Fabries indicated that Operations Research continues to be of high importance, for example in effectively managing disruptions or in constructing a reliable winter schedule so people can continue to use the trains with minimal impact on their travel plans.

Pieter Bootsma of AirFrance KLM
Pieter Bootsma, Executive Vice-President Marketing, Revenue Management and Network at Air France KLM, explained that successfully running an airline without the use of Operations Research would be impossible. Operations Research is involved in nearly every process within the company. Whether it is strategic planning, crew management, flight operations, planning of ground services or maintenance scheduling, without Operations Research managing these processes would be nearly impossible. Given the narrow margins in the airline industry, slight improvements in efficiency can make the difference between profit and loss. In his talk, Pieter Bootsma highlighted the use of OR in revenue management. The essence of revenue management is to use the price or availability of seats to influence customer demand. By analysing the booking behaviour of passengers, Air France KLM is able to estimate the willingness to pay of each passenger category. For example, business people are willing to pay more for a seat than leisure passengers, and they tend to book their flights closer to the actual departure date. With this knowledge Air France KLM can use the availability of seats and set the right price to maximize revenue. As a consequence the availability and/or price of a passenger seat will vary over time. Revenue management has led to a paradigm shift in the airline business as it focuses on maximizing revenue, not the number of seats occupied. Many have called it the single most important technical development in transportation management, showing that Operations Research can be a disruptive technology.
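
To give a flavour of the kind of logic behind such seat availability decisions, here is a textbook two-fare sketch based on Littlewood’s rule. This is my own illustration, not Air France KLM’s method; the fares, capacity and normal demand model are assumptions.

```python
# Two-fare-class seat protection sketch (Littlewood's rule), purely illustrative;
# fares, capacity and the normal demand model are assumptions, not airline data.
from scipy.stats import norm

capacity = 150
fare_business, fare_leisure = 600.0, 220.0
mu_business, sigma_business = 40, 12        # assumed demand for the high fare

# Littlewood: protect seats for business as long as
# fare_business * P(business demand > protection) >= fare_leisure.
protection = mu_business + sigma_business * norm.ppf(1 - fare_leisure / fare_business)
booking_limit_leisure = capacity - round(protection)

print(f"protect {round(protection)} seats, sell at most {booking_limit_leisure} leisure tickets")
```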

Luke Disney of North Star Alliance
The impact of Operations Research on humanitarian assistance was illustrated by North Star Alliance Executive Director Luke Disney. The North Star Alliance started as a practical industry response to an urgent health problem: the spread of HIV/AIDS among truck drivers in sub-Saharan Africa, which negatively impacted the distribution of relief food to hungry communities. By establishing a network of drop-in health clinics, called Roadside Wellness Centres (RWCs), at truck stops, ports, rail junctions and border crossings, North Star Alliance can offer mobile populations like truck drivers essential healthcare and information. This access to healthcare allows truck drivers to get treatment when necessary while at work, securing the distribution of relief food and road transportation in Africa. Luke Disney highlighted the impact of Operations Research with Polaris, a model that is used to optimise the placement of new and the repositioning of existing RWCs, including the optimisation of staffing and inventory levels. Key in building the network of RWCs is improving the continuity of care along the trade lanes within Africa. Continuity of care ensures that truck drivers have access to healthcare everywhere they go. It also ensures that medical help is nearby when assistance is suddenly required, for example when a truck driver gets malaria or loses his pills while being treated for tuberculosis or HIV. Since financial resources are limited, the Polaris model helps the North Star Alliance to gradually build a network that achieves the biggest increase in continuity of care for each dollar invested.


Operations Research has proved many times in the past to be the best answer to handling complexity. It came into existence during the Second World War, when mathematicians revolutionized the way wars are waged and won by applying mathematics to almost any challenge in warfare. Today Operations Research has found its way into many applications that impact business, people and society. The stories above make this very clear. You could say it is the least known, most influential scientific practice of our time.



Sunday, 1 September 2013

Will Big Data end Operations Research?

Until a few years ago the term Big Data was only known to a limited group of experts. Now it is nearly impossible to visit a website or read an article without stumbling across Big Data. A Google query returns over 1.5 billion hits in less than half a second; two years ago this was only one fifth of that number. The hits include links to web pages on the increased number of visitors to a museum or the improvement in Tesco’s supply chain performance thanks to the use of Big Data. You get the impression that Big Data is everywhere. Many times Big Data is positioned as the answer to everything. It is the end of theory, as former Wired editor Chris Anderson wants us to believe. The promise of Big Data seems to imply that in the sheer size of data sets there lurks some kind of magic. When the size of the data set passes some critical threshold, the answer to all questions will come forward, as if Apollo no longer lives in Delphi but in large data sets. Has Deep Thought become reality with Big Data, and will it answer all our questions, including the ultimate question of Life, the Universe and Everything? Or a slightly simpler question, whether P = NP or not? Will Big Data end Operations Research?

red = Big Data, blue = ORMS
The introduction of enterprise-wide information and planning systems like ERP, and of the Internet, has led to a vast increase in the data that is being collected and stored. IBM estimates that 2.5 quintillion bytes of data are generated each day, and this rate keeps growing; so fast that 90% of the data available today was created in the past two years. The ability to use this data can have enormous benefits. The success of companies like Google and Amazon proves that. When sales of two items correlate at Amazon, they end up in each other’s “Customers Who Bought This Item Also Bought” lists, boosting sales. The same principle is used by Google in their page rank algorithm. Using similar techniques, Target was able to identify a correlation between the sales of a set of products and pregnancy. Using point of sale data and data from customer loyalty cards, this correlation was used to personalise the ads and offers sent to Target customers, upsetting a father when his teenage daughter started to get ads for diapers and baby oil. Quite a story, but is it proof of the success of Big Data? We must be careful not to be fooled by our observation bias, as we don’t know how many Target customers incorrectly received the pregnancy-related ads.
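
The “also bought” idea boils down to counting how often items appear in the same basket. A toy sketch, with invented baskets, could look like this:

```python
# Toy "customers who bought X also bought Y" sketch based on co-occurrence counts;
# the baskets below are invented for illustration.
from collections import Counter
from itertools import combinations

baskets = [
    {"book", "reading light"}, {"book", "bookmark", "reading light"},
    {"bookmark", "pen"}, {"book", "bookmark"}, {"pen", "notebook"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

def also_bought(item, top=3):
    """Items most often bought together with `item`."""
    related = Counter()
    for (a, b), n in pair_counts.items():
        if item == a:
            related[b] = n
        elif item == b:
            related[a] = n
    return related.most_common(top)

print(also_bought("book"))   # e.g. [('bookmark', 2), ('reading light', 2)]
```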

As data is not objective and correlation most of the time doesn’t imply a causal link, we must be careful not to blindly follow what the data seems to tell us. Data is created as we gather it; it acquires meaning when we analyse it. Every analyst should know the many pitfalls in each of these steps. For example, the analysis of Twitter and Foursquare data from the New York area during hurricane Sandy showed some interesting results. The number of tweets coming from Manhattan suggested that Manhattan was worse off than Coney Island. However, the reverse was true, as smartphone ownership is much higher in Manhattan. Due to blackouts, recharging smartphones became impossible, lowering the number of tweets and check-ins from Coney Island even more. A similar thing happened at the beginning of the year when Google estimated the number of people infected with flu. Google estimated that 11% of the US population was infected, twice the CDC’s estimate. The probable cause of the overestimation was the media hype boosting the number of Google queries on the subject. A lot of data is not a panacea after all?

Making decisions based on correlations found in data alone can bring you benefits when you are a Target customer, but can keep you from rescue when you’re living in Coney Island. Quality decision making doesn’t result from data alone, let alone from a random quest for correlations in large data sets. Data, however, is an important ingredient for quality decision making. As Ron Howard suggests, quality decision making starts with framing the problem. The decision is supported by what you know (data), what you want (decision criteria, objectives) and what you can do (alternatives, requirements and restrictions). Collectively, these represent the decision basis, the specification of the decision. Logic (the math model) operates on the decision basis to produce the best alternative. Note that if any one of the basic elements is missing, there is no decision to be made. What is there to decide when there are no alternatives? How to decide between alternatives when it’s unclear how to rank them? If you do not have any data linking what you decide to what will happen, then all alternatives serve equally well. The reverse is also true: gathering data that doesn’t help to judge or rank the alternative decisions to the problem is pointless. Data is said to be the new oil. My take is that organisations shouldn’t be fixated on gathering and mining data, but should combine it with a structured approach to decision making. Then data will become the new soil for improvements, new insights and innovations.



It took Deep Thought 7.5 million years to answer the ultimate question. As nobody knew what the ultimate question of Life, the Universe and Everything actually was, nobody knew what to make of the answer (42). To find the question belonging to the answer (some kind of intergalactic Jeopardy!) a new computer is constructed (not Watson), which needs 10 million years to find the question. Unfortunately it is destroyed five minutes before it completes its run. An Operations Researcher (or Certified Analytics Professional) would probably have done a better job, starting with framing the question, gathering and validating the relevant data, constructing and calibrating a model, and finally providing the best possible answer. I already know that it isn’t 42 or Big Data, but Operations Research.

Sunday, 7 July 2013

Incroyable, fraude aux examens

Copy of stolen exam
Over the last few weeks the papers have been filled with the largest exam fraud ever in the Netherlands; the news even reached the New York Times. Three high school students were arrested on suspicion of stealing copies of national exams for over 20 courses. The fraud was first detected at the end of May, when a French language exam was posted online. To be safe, the Board of Examinations postponed the French exam, affecting almost 17,000 students. It is unclear how many students benefited from the stolen exams; investigations are still in progress. As a precaution, all exams taken at the school from which the exams were stolen have been declared invalid and the students had to take the examinations again. But will all those who benefited from the stolen exams be found, even if they studied at other schools? Luckily, fraud can be detected using mathematical techniques and sharp thinking.

Fraud and cheating are of all times. As W.C. Fields put it: “if it is worth having, it’s worth cheating for”. In the case of the Dutch exam fraud, since most courses are examined using multiple choice tests, statistical tests can be applied to find smoking guns for fraud. During an examination, each student fills in a scantron bubble sheet. These sheets can be read using a special device that digitizes the sheet into a string similar to:

012345678           2333324424444441241255352413511441414343113412.....

The string starts with the student id number, followed by the answer string. The numbers 1, 2, ... map to answers A, B, ... respectively. Comparing this string with the answer key gives the number of correct answers and the overall score. When a student wants (or needs) to cheat to pass the exam, he probably doesn’t want to change all the wrong answers; that would give it away, certainly if his past performance is not that good. So he needs to decide which answers to change, for example by making mistakes on purpose. It is reasonable to expect that the more difficult a question is, the more students will get it wrong. Since the fraudulent student doesn’t know which questions are more difficult than others, his answers will deviate from the expected pattern, as he will have more of the difficult questions right and more of the simple ones wrong. This deviation can be detected using statistical tests. If students cooperate (or conspire), their answers will show similar patterns. By using statistics to test for similarity between parts of the answer strings, these patterns can be detected as well, for example using Cohen’s kappa statistic.
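
As a small sketch of that last idea, two answer strings can be compared with Cohen’s kappa, which measures agreement corrected for chance. The strings and the 0.8 threshold below are illustrative assumptions, not values from the actual investigation.

```python
# Sketch: flag suspiciously similar answer strings with Cohen's kappa.
# Answer strings and the 0.8 threshold are illustrative assumptions.
from sklearn.metrics import cohen_kappa_score

student_a = "2333324424444441241255352413511441414343113412"
student_b = "2333324424444441241255352413511441414343113432"

kappa = cohen_kappa_score(list(student_a), list(student_b))
print(f"kappa = {kappa:.2f}")
if kappa > 0.8:   # agreement far beyond what chance alone would explain
    print("similarity warrants a closer look")
```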


Cheating is everywhere, not only in high school exams. Using statistical tests it can be detected, even when there are no stolen exams uploaded to the Internet to act as a red flag. The tests indicate that something might be wrong and can direct further investigations. These tests don’t only expose students, but teachers and professors as well. In 2011 Diederik Stapel, professor and dean of the School of Social and Behavioral Sciences at Tilburg University, stepped down after committing research fraud. It was his misuse of statistics that gave him away. The numbers didn’t lie, even though they were made up.

Saturday, 11 May 2013

The incredible story of the expected that is not to be expected


Once in a while I get an invitation from my bank to discuss my personal financial situation. Most of the time I ignore these invitations because I know their objective; it goes like this. After reviewing my investments, pension and mortgage, the lovely lady (most of the time it is a lady, I don’t know why) suggests changing my investment portfolio for better performance and puts a brochure in front of me for a new investment opportunity. The most prominent part of the brochure is a bar chart with the expected return of the investment compared with a couple of other investments (from other banks), and of course the new opportunity does best. But can I expect this expected return? Kahneman taught me not to trust my intuition in this, so this time I decided to go home, do some analytical experiments and decide based on the facts.

Suppose the suggestion is to invest in the Dutch AEX index, an index composed of Dutch companies that are traded on the Amsterdam stock exchange. In the table below you find the annual returns over the past 20 years.

If I had invested €1,000 at the start of 1993 and reinvested the returns each year, the investment would have grown to €2,642 at the end of 2012, with an average annual return of 8.38%. The future value of the investment the bank is suggesting is calculated using the average annual return of the past and compounding it over a 20-year period. In a formula: I · (1 + r)^n, with n = 20, I the amount invested and r the expected annual return. Following this logic, and assuming that returns from the past are representative of the future, a €1,000 investment now would result in €1,000 · (1 + 8.38%)^20 = €4,999 in 20 years. Wait a minute, how does this compare to the €2,642 we calculated earlier? The €4,999 is nearly twice as much. How come the investment is expected to do better in the future than it did in the past? My experiment will show why, and also that this future value is improbable because it will be reached less than half of the time. So what the bank is telling me to expect is not to be expected.

A small bootstrap experiment will show the improbability of the expected value. In the bootstrap I created 10,000 sequences of 20 annual returns. Each sequence is created by randomly selecting a return from the 1993-2012 period, placing it back in the sample again, and repeating this 20 times. The results of the bootstrap are summarized in the table and graph.
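
For those who want to repeat the experiment, here is a minimal sketch of the bootstrap in Python. The return series below consists of placeholders, since the table itself is not reproduced here; replace them with the actual 1993-2012 AEX returns to reproduce the figures in the text.

```python
# Bootstrap sketch of 20-year investment outcomes, mirroring the experiment in the text.
# The values in `annual_returns` are placeholders; use the AEX returns from the table.
import numpy as np

rng = np.random.default_rng(42)
annual_returns = np.array([0.45, -0.21, 0.17, 0.33, 0.41, 0.20, 0.05, -0.05,
                           -0.20, -0.36, 0.05, 0.03, 0.25, 0.13, 0.04, -0.52,
                           0.36, 0.06, -0.12, 0.10])

n_runs, horizon, invested = 10_000, 20, 1_000.0
samples = rng.choice(annual_returns, size=(n_runs, horizon), replace=True)  # sample with replacement
future_values = invested * np.prod(1 + samples, axis=1)                     # compound each sequence

print("mean future value  :", round(future_values.mean()))
print("median future value:", round(np.median(future_values)))
print("P(below the mean)  :", (future_values < future_values.mean()).mean())
```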


As you can see, the expected value is indeed larger (€4,985) than the realized value of €2,642, but also, in nearly 70% of the cases the future value will be lower than the average. Also note that the spread between the minimum value and the average is much smaller than between the average and the maximum value: the distribution of the future values is skewed. The cause of all this is the effect of compounding, which is key in explaining why the expected is not to be expected.


The expected value of the investment is derived by compounding the initial investment with the arithmetic average of the returns, in our case 8.38%, so theoretically a future value of €4,999 results. However, the median value of the investment, the value which we have a 50% chance of either falling short of or exceeding, is derived by compounding the €1,000 using the geometric average return (or constant rate of return). The geometric average return for 1993-2012 equals 4.98%, therefore the theoretical median value equals €2,642.
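
In formulas, with r1, ..., rn the 20 historical annual returns, the two numbers above follow from the arithmetic and the geometric average respectively (the median interpretation holds approximately, under the usual lognormal view of compounded returns):

```latex
\[
  \text{FV}_{\text{mean}} = I\,(1 + \bar r)^{n}
  = 1000 \times (1 + 0.0838)^{20} \approx 4999,
  \qquad
  \bar r = \frac{1}{n}\sum_{i=1}^{n} r_i ,
\]
\[
  \text{FV}_{\text{median}} = I\,(1 + r_g)^{n}
  = 1000 \times (1 + 0.0498)^{20} \approx 2642,
  \qquad
  r_g = \Bigl(\prod_{i=1}^{n} (1 + r_i)\Bigr)^{1/n} - 1 .
\]
```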

As you can see from the graph, the compounding of returns causes the distribution of the future values to become skewed. There are fewer values in excess of the average future value, but they exceed it by a greater amount than that by which the below-average values fall short of it. So next time you are offered an investment opportunity or a new pension plan, be sure to realize that the expected value is an inflated estimate of the future value; the median future value is more realistic, as you can expect to achieve or exceed it half the time. The expected is to be expected in that case. Besides the now standard disclaimer that past performance is not necessarily indicative of future results, banks should also note that the results shown are an overestimate of the expected results.







Monday, 1 April 2013

Patterns: from tessellations to buying behaviour


A few weeks ago I was invited by the director of Museum Gouda, Gerard de Kleijn, to give a lecture. Not just a lecture on optimisation, but one that would link an item from the museum to my profession. There are many ways in which art and math are related, with the obvious topics being the golden ratio, perspective, topology and fractals. Walking through the museum with Gerard, having a look at the stained glass windows, the ceramics and the art work of the Haagse School, I realised that all of it had to do with modelling, the kind of thing I do all the time using mathematics. I found the closest link to math in the various tessellations in the museum buildings, which immediately gave me the theme of my lecture: patterns. Math, after all, is also known as the science of patterns. See for example Hardy in A Mathematician’s Apology:

A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.
Patterns are very important to us humans. Our brains are pattern recognition machines because of our associative way of learning. Thousands of years ago these pattern recognition capabilities helped us survive because our brain could recognise the sound of a predator in the grass. A peculiar thing to note is that pattern recognition is a right brain activity, the part of the brain associated with creativity and design. The left side is associated with logic and analytical thinking. Does math modelling bring these two together?


In my lecture, starting from the tessellations in the museum buildings, I explained the math that goes into designing these kinds of tessellations. It’s basic geometry. A few hundred years back Sebastien Truchet (1657–1729), a Dominican father, was the first to study tessellations mathematically. He worked on the possible patterns one could make with square tiles that are divided diagonally. His model of pattern formation was later taken up by Fournier and is now known to mathematicians and designers as Truchet tiling. Fun to note, maybe, is that Truchet also designed the font Romain du Roi, which we now know as Times New Roman (a designer’s worst nightmare?). Dutch artist M.C. Escher, famous for his mathematically inspired designs, took tessellations much further than Truchet. He drew on the work of famous mathematicians like George Pólya, Donald Coxeter and Roger Penrose and created wonderful woodcuts and lithographs. He became fascinated with tessellations after visiting the Alhambra in Granada, which is filled with mind-blowing ancient tessellations. How those were created is still a puzzle. In Islamic tessellations not one but many types of tiles are combined; the best-known are a set of tiles called the Girih tiles. Tessellations with these tiles go back to around 1200, and their arrangements improved significantly starting with the Darb-i Imam shrine in Isfahan, Iran, built in 1453. Today it is still not clear how the craftsmen of that time were able to create these wonderful and very complex tessellations, as it is believed that the level of mathematical sophistication at the time was not high enough. It was only in 2007 that research showed that this type of complex tessellation is comparable to Penrose tiling, predating it by five centuries.
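
For those who want to play with Truchet’s idea themselves, here is a small sketch that draws a random tiling from diagonally divided square tiles; the grid size and colours are arbitrary choices of mine.

```python
# Sketch: a random Truchet tiling from diagonally split square tiles (matplotlib).
import random
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

n = 12                                    # tiles per side
fig, ax = plt.subplots(figsize=(6, 6))
for x in range(n):
    for y in range(n):
        corners = [(x, y), (x + 1, y), (x + 1, y + 1), (x, y + 1)]
        start = random.randrange(4)       # pick one of the four orientations
        triangle = [corners[start], corners[(start + 1) % 4], corners[(start + 2) % 4]]
        ax.add_patch(Polygon(triangle, facecolor="black", edgecolor="none"))
ax.set_xlim(0, n); ax.set_ylim(0, n)
ax.set_aspect("equal"); ax.axis("off")
plt.show()
```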

Patterns do not appear in tessellations alone; they can be found nearly everywhere. You see them in nature, in architecture, in our DNA, in fashion, in language, in music and in our buying behaviour. But unlike the clear patterns (and beauty) of tessellations, patterns in buying behaviour first need to be discovered, using advanced mathematical techniques. The basic material for pattern discovery in this case isn’t ceramic, stone, steel or even glass, but sales slips. Many grocery stores keep track of our purchases, in brick-and-mortar stores and online. Dutch firm Ahold, for example, registers over 4 billion individual items purchased each year, and has been doing so since 2000. Imagine the amount of data available to them. Until recently this data was used to develop efficient replenishment strategies for the stores and to optimise sourcing and stock levels. Using advanced mathematical techniques like pattern recognition and machine learning, they are now in search of our buying patterns.

Compared to the grocery stores of 60 years ago, many things have changed. In those days the shop owner knew us well and was able to offer us suggestions that with high probability would fit our preferences. But in the 1950s shops grew larger and we started to collect the groceries we wanted ourselves, instead of the grocer doing it for us. As a consequence, knowledge of our preferences eroded, leaving the grocer (and the producers) clueless about what products to offer us. Today, by keeping track of the items sold and using advanced mathematical techniques, the grocer is regaining that knowledge. By mining for patterns in the vast amount of data, the grocer can create a picture of his clientele: which groups of customers (tribes) visit his store, what defines them, which brands they buy and what their preferences are. Based on this information the grocer can better target these tribes in marketing, whether on paper, online or even in-store, also called micro targeting (see my blog).
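
A minimal sketch of how such tribes could be found from basket data, using k-means clustering; the features and numbers below are invented, a real analysis would of course derive them from the sales slips.

```python
# Sketch: grouping customers into "tribes" with k-means on simple basket features.
# The feature matrix is invented; a real analysis would derive it from sales slips.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: weekly spend (EUR), share of private-label items, share of organic items
customers = np.vstack([
    rng.normal([40, 0.7, 0.05], [8, 0.10, 0.02], size=(100, 3)),   # budget shoppers
    rng.normal([90, 0.2, 0.30], [15, 0.05, 0.05], size=(100, 3)),  # premium/organic shoppers
])

tribes = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print("tribe sizes  :", np.bincount(tribes.labels_))
print("tribe centres:\n", tribes.cluster_centers_.round(2))
```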

Mining for patterns is not restricted to analysing buying behaviour. The techniques available have become incredibly powerful in a range of fields, from the workplace to the voting booth, from health care to counter-terrorism. However, great care must be taken when using the results of data mining algorithms. First, the algorithms search for correlations in the available data. That doesn’t mean there actually is a causal relation. A famous example is the claim that ice cream consumption causes drowning: as ice cream sales increase, so does the number of drowning deaths. The conclusion is wrong, however, because it fails to recognize the importance of temperature. As the temperature rises, both ice cream consumption and water-based activities increase (and therefore drowning deaths). Second, careful attention must be paid to the data used in the analysis. It is tempting to use the data that happens to be available, but that doesn’t imply it is sufficient. It’s like the drunk looking for his keys under the lamppost and not in the dark alley where he lost them, because it’s too dark to see in the alley and under the lamp the light is much better. So think about what needs to be analysed and gather all relevant data. These are only two of many prerequisites for successful pattern discovery. Searching for patterns really is a specialist’s task, even if your statistical package supplier wants you to think differently.

Math can be used to design complex and beautiful patterns; it is also required to discover patterns in mountains of unstructured data. Math indeed is the science of patterns. Applying it can create enormous value and beauty, but in the hands of untrained users it will be useless. To be effective, training is required; becoming a numerical craftsman takes time and experience, just like it did for the masters whose work is displayed in Museum Gouda.

Friday, 1 March 2013

Winds of Fortune or Despair?


Have you ever been to your local grocery store and got paid at the checkout, instead of paying for the contents of your shopping cart? In the European electricity markets this happened a couple of times in the past few years. Electricity prices turned negative due to excess supply of electricity and low demand. In Europe, electricity comes from various sources: a mix of coal, gas, nuclear, hydro and a rising number of wind and solar power plants. Current technology doesn’t yet allow electricity to be stored efficiently, so demand and supply must be matched at all times, with price as the principal mechanism to steer supply. On the electricity market, the lowest bidder looking to supply the grid with electricity wins. Since wind and sun come at no cost, subsidies for wind or solar power production allow for bids below €0 per unit while the electric company still makes a profit. And because nuclear, hydro and coal-fired power plants can’t be shut down without considerable cost, electric companies sometimes bid below €0 per unit as well, simply because the cost of shutting down and restarting is higher.

The uncertainty in the electricity (spot) price is only one of the many uncertainties decision makers in the electricity market face. The emergence of new technology increases the complexity of their decision making even more. The recent shale gas boom in the US caused electricity generators there to switch from coal to gas. As a consequence, US coal became cheap and found its way to Europe, where it displaced gas. For the future, the management of electric companies in Europe is therefore faced with difficult trade-offs. The rising capacity of wind and solar power forces them towards a flexible mix of gas and wind/solar power, while the shale gas boom in the US pushes them towards less flexible coal-fired plants. Which portfolio is best, and what should be the weight of each power source in the portfolio?

Given the rising complexity of decision making and the number of uncertainties involved, electric companies will benefit strongly from adopting an optimisation-based approach. In capacity investments the electric company needs to decide on the number, size and location of new power plants. The power source, technology and timing of the investment also need to be established, all under a variety of economic and technological uncertainties. It’s a complex decision, which management typically solves by simplification. Investment opportunities are intuitively evaluated on a standalone basis, considering only a couple of the most obvious uncertainties. For each investment opportunity a net present value is calculated, which is used to rank the opportunities. The top ones in the list are selected as the best possible opportunities and make up the investment portfolio.

An optimisation-based approach to power generation capacity investment decisions, however, allows the electric company to evaluate opportunities in a holistic and coherent manner. Instead of evaluating the investment opportunities case by case, optimisation-based methods allow the forecasted cash flow of a new power plant to be connected to the forecasted cash flow of the rest of the portfolio of power plants. Using Monte Carlo simulation, evaluating the performance of the complete portfolio over future values of the relevant uncertainties becomes relatively easy. Without this link it is impossible to analyse the future financial and technical performance of the electric company in a consistent manner. An optimisation-based approach also allows for decision making based on facts rather than biases, predispositions or beliefs, because it requires an explicit definition of the performance indicators used for ranking and choosing between investment portfolios. By carefully analysing the most promising portfolios (the ones on the efficient frontier), management can find out which new power plants to invest in. If a certain investment project is part of the majority of the efficient portfolios, it certainly would be a good and robust choice to invest in.
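
To illustrate the Monte Carlo step, here is a minimal sketch that exposes two candidate plants to the same simulated price scenarios, which is exactly the linkage a standalone NPV misses. All numbers (prices, volumes, efficiencies, discount rate) are illustrative assumptions, not data from any electric company.

```python
# Monte Carlo sketch: simulate portfolio cash flow with plants linked through
# shared price uncertainties. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n_scenarios, years = 10_000, 10

# Shared (portfolio-wide) uncertainties: electricity and gas price paths.
power_price = rng.normal(55, 12, size=(n_scenarios, years))   # EUR/MWh
gas_price = rng.normal(25, 6, size=(n_scenarios, years))      # EUR/MWh thermal

# Two candidate plants exposed to the same scenarios (the "linkage").
gas_plant = 0.4e6 * (power_price - gas_price / 0.55)          # spark spread * volume
wind_farm = 0.3e6 * power_price - 8e6                         # no fuel cost, fixed O&M

discount = 1.06 ** -np.arange(1, years + 1)
npv = ((gas_plant + wind_farm) * discount).sum(axis=1)

print("expected portfolio NPV: %.1f mEUR" % (npv.mean() / 1e6))
print("5%% worst-case NPV    : %.1f mEUR" % (np.percentile(npv, 5) / 1e6))
```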


Since we can’t predict the future, certainly not for a long period ahead, the electric company must be careful not to fix its investment strategy for the long term. Instead, optimisation-based analysis allows for periodic reruns of the portfolio model and adaptation of the investment strategy as conditions change over time (increased wind power capacity, new emission regulations, the shale gas boom, etc.). That way the investment decision is no longer static but becomes dynamic, allowing electric companies to benefit from the winds of fortune instead of being taken by surprise.

Sunday, 20 January 2013

Back to School


Each year the Dutch Operations Research society (NGB) organizes a seminar together with the Dutch Network on the Mathematics of Operations Research (LNMB). This year we organised the seminar for the 15th time; traditionally it is held in the geographical middle of the Netherlands, Lunteren. In the past few years the central theme of the seminar has been the practical application of Operations Research in a specific area; we had themes like health care, traffic, energy, supply chain optimisation, marketing and humanitarian aid. This year we decided to take a different angle. We wanted to give our members the opportunity to upgrade their knowledge of Operations Research by offering them tutorials on new developments in OR. This idea had gradually grown from the feedback we received on several webinars and in-company lectures on topics in OR, for example on Portfolio Optimisation (with Professor Sam Savage) and on Robust Optimisation (with Professor Dick den Hertog). These lectures seemed to fill a need to keep up with new developments in OR, especially for people who graduated from university (either at PhD or Master level) some time ago. The theme for the 15th seminar therefore became Back to School.

Conference center "De Werelt", Lunteren

The two main subjects we chose for the seminar were Robust Optimisation (RO) and Mixed Integer Nonlinear Programming (MINLP). Each tutorial consisted of a theoretical introduction by a specialist in the field. For RO we invited Professor Dick den Hertog from Tilburg University; for MINLP we invited Professor Jeff Linderoth (@jefflinderoth) from the University of Wisconsin-Madison. Their in-depth discussions aimed to explain the essence of the new developments and focus on the practical use of the techniques presented. The second part of each lecture focused on practical application. Marcel Hunting from AIMMS shared insights and modelling tricks in practical MINLP modelling. We closed the seminar with a lecture from Ruud Brekelmans on the use of MINLP in optimizing the Dutch dike heights, a project that was selected as a finalist for the 2013 Franz Edelman Award.

Robust optimisation is a recent development in the field of optimisation, with its roots in the early 1970s. It explicitly takes parameter uncertainty (e.g. measurement, estimation or implementation errors) into account without assuming a specific probability distribution for that uncertainty. Professor Den Hertog explained that instead of seeking to immunize the solution in some probabilistic sense against stochastic uncertainty, RO constructs a solution that is optimal for any realization of the uncertainty within a given set. RO starts with modelling the uncertainty region and creating the robust counterpart of the original model. Under certain conditions on the structure of the uncertainty, the robust counterpart can be solved with relative ease using an LP, CQP or conic optimisation solver. From a practical point of view, Professor Den Hertog’s advice on applying RO is to first test the robustness of the solution to the original problem with standard sensitivity analysis or simulation (a best practice in any optimisation challenge). If it is not robust, use stochastic programming if the distribution of the uncertain parameter is known; if that’s not the case, use RO. When applying RO one must avoid equality constraints when formulating the robust counterpart, because these can cause inconsistencies between the solution of the original problem and the solution of the robust counterpart. So modelling expertise is required! Professor Den Hertog concluded that Robust Optimisation provides a natural way of modelling uncertainty and that the robust counterpart makes it tractable. A wide variety of applications of RO has already been documented, from which we practitioners can extract information for our own robust optimisation projects.
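
As a small illustration of what a robust counterpart looks like (my own example, not one from the lecture), take a single linear constraint whose coefficients are only known to lie in a box around their nominal values:

```latex
% Robust counterpart of a^T x <= b under box uncertainty
% a_i in [\bar a_i - \delta_i, \bar a_i + \delta_i]:
\[
  a^{\top} x \le b \quad \forall\, a \in U
  \quad\Longleftrightarrow\quad
  \bar a^{\top} x + \sum_i \delta_i \,|x_i| \le b ,
\]
% which becomes linear again after introducing y_i >= |x_i|:
\[
  \bar a^{\top} x + \delta^{\top} y \le b, \qquad -y_i \le x_i \le y_i .
\]
```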

The second main topic of the seminar was Mixed Integer Nonlinear Programming (or was it MINLP Wars?) by Professor Jeff “Obi-Wan” Linderoth. As he tweeted, it was a once in a lifetime experience for him. Professor Linderoth started with the relevance of MINLP, which comes from the fact that the world is not linear. Many decision problems in practice involve nonlinearity; think of water network design, petrochemical product blending and oilfield planning. The need for practical solution methods to address these nonlinearities is therefore high. The normal approach to solving a MINLP is to relax the integrality constraints, construct a convex relaxation of the set of feasible solutions, and then solve the problem using branch and bound. Professor Linderoth has done a lot of research to develop a new approach to MINLP, leveraging the available MILP technology. In the lecture he explained the background on algorithm engineering, (disjunctive) cutting planes and pre-processing. In general he concluded that applying “traditional” techniques from MILP in the domain of MINLP can lead to significant improvements in our ability to solve MINLP instances, a conclusion that Marcel Hunting from AIMMS reconfirmed. He showed how pre-processing and cutting planes were key to the performance improvements in CPLEX. He also provided advice to OR practitioners on how to improve solver performance by applying specific reformulations in a MINLP. As an example he showed how the product of an integer and a real variable in a constraint can be reformulated during pre-processing to obtain more robust solutions.
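
The best-known instance of that kind of reformulation (my illustration of the general idea, not necessarily the exact one shown) is the product of a binary and a bounded continuous variable, which can be replaced by a set of linear constraints:

```latex
% Standard linearisation of z = x*y with x in {0,1} and 0 <= y <= U:
\[
  z \le U x, \qquad z \le y, \qquad z \ge y - U(1 - x), \qquad z \ge 0 .
\]
```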

The practice of MINLP was illustrated by Ruud Brekelmans from Tilburg University. He explained the modelling and solution background of a project he worked on to determine new safety standards for dike heights in the Netherlands. Stated very simply, the decision problem was to determine the timing and sizing of dike heightenings, minimizing the total cost, consisting of the investment cost of heightening the dikes and the economic loss in case of a flood. If you would like to know more about the project, please visit the Analytics conference in San Antonio, as it was selected as a finalist for the 2013 Franz Edelman Award. We wish the team lots of luck; yet another Dutch Edelman award in the making?

The general conclusion for OR practitioners from the MINLP lectures was not to be afraid of MINLP. The continuous improvement in (MINLP) solvers creates possibilities that were not available several years ago. But even today it is necessary to come up with the right model formulation to solve your problem; you need to help the solvers where you can. Again, modelling expertise is required.

Given the significant rise (over 30%) in the number of participants compared to previous years, we can conclude that the seminar filled a need. The feedback on the content and the speakers was also great, so we can look back on a successful seminar. We decided that next year’s seminar will have the same format; topics are to be announced. Suggestions for topics are more than welcome. See you next year in Lunteren!