
Robust Optimization Strategies for Total Cost Control in Project Management

Joel Goh*    Nicholas G. Hall†    Melvyn Sim‡

* Graduate School of Business, Stanford University
† Department of Management Sciences, The Ohio State University
‡ NUS Business School and Risk Management Institute, National University of Singapore
Corresponding author: [email protected]

February 13, 2010

Abstract

We describe robust optimization procedures for controlling total completion time penalty plus crashing cost in projects with uncertain activity times. These activity times arise from an unspecified distribution within a family of distributions with known support, mean and covariance. We develop linear and linear-based decision rules for making decisions about activity start times and the crashing of activity times. These rules identify decisions that perform well against all possible scenarios of activity time uncertainty. The resulting crashing strategies are implemented in both static and rolling horizon modes. Whereas the traditional planning methodology PERT does not consider correlation between past and future performance within or between activities, our methodology models both types of correlation. We compare our procedures against PERT, and also against the alternative approach of Monte Carlo simulation, which assumes more information. Extensive computational results show that our crashing strategies provide over 99% probability of meeting the overall project budget, compared to less than 45% for the previous approaches that use the same information. Expected budget overruns are similarly improved from over 25% to less than 0.1%. The relative advantages of the static and rolling horizon implementations are also discussed. We identify several managerial insights from our results.

Key words and phrases: project management; time/cost tradeoff under uncertainty; robust optimization; linear decision rule.

OR/MS Index 1989 Subject Classification: Project management: PERT. Probability: stochastic model: applications. Project management: resource constraints.

1  Introduction

The use of project management as a planning methodology has expanded greatly in recent years. As evidence of this, the Project Management Institute saw its membership increase from 50,000 in 1996 to over 270,000 in 2008 (Pells 2008). Moreover, most of this increase is due to modern applications, for example information technology, with substantial further growth potential. Further, the use of shorter life cycles for products and services (Value Based Management.net 2009) is leading to increased application of project management in the development of new products and services. Also, the pace of corporate change is constantly increasing as a result of new technologies, more intense competition, and more demanding and less predictable customers (1000ventures.com 2009), and achieving this organizational change in an effective way requires professional project management.

The two most important quantifiable performance measures in project management are completion time relative to a planned schedule, and cost relative to a budget (Might and Fischer 1985). Indeed, a widely used informal definition of project success is "completion of the project on time and on budget". However, many projects fail to achieve this benchmark. For example, only 35% of software development projects are completed on time (Chaos Report 2006). When late project completion is a possibility, one possible solution is the expediting of activity times, or crashing. In practice, crashing is usually accomplished through the commitment of additional resources, such as personnel, equipment, and budget, to the individual activities. This results in an important tradeoff between completion time and cost in making crashing decisions (Klastorin 2004, Kerzner 2009). If activity times are deterministic or highly predictable, then crashing decisions can be made efficiently using optimization models. However, in most projects, random variations in activity times threaten delays to the overall project schedule.
Herroelen and Leus (2005) provide a comprehensive survey of various types of approaches to scheduling under uncertainty. In practice, failure to compensate adequately for activity time variations is a prominent cause of project failure (Hughes 1986). The traditional, and most widely used, approach for planning projects in the presence of uncertainty in activity times is PERT (U.S. Navy 1958). The PERT model requires decision makers to estimate, for each activity, optimistic, pessimistic and most likely durations. Based on an approximation to the beta distribution, these estimates are used in formulas for the expectation and standard deviation of each activity time in the project. In order to build these random activity times into a simple probabilistic model of overall project completion time, PERT requires three important assumptions regarding the project's activities and its precedence network, as follows.

A1: The uncertain activity times are probabilistically independent.
A2: A path with the longest expected length is also a critical path after realization of the activity times.
A3: Based on the Central Limit Theorem, the total length of the critical path approximately follows the normal distribution.

Each of these assumptions may be difficult to justify both in theory and in practice. First, if two activities use the same resources, then their performance is likely to be positively correlated; hence, Assumption A1 is unlikely to hold. Second, in situations where the project network contains many paths of similar expected length, it is frequently the case that a path which is not critical based on expected activity times becomes critical once activity times are realized; hence, Assumption A2 does not hold. Finally, if there are fewer than about 30 activities in series on the critical path, then the approximation of the mean under the Central Limit Theorem in Assumption A3 is likely to be poor (Black 2008). Depending on the precedence structure, there may need to be several hundred activities in the project in order to meet this requirement. Moreover, achieving an accurate approximation to the tail probabilities of the distribution of the critical path length requires still many more activities (Barbour and Jensen 1989). An alternative to PERT that does not require Assumptions A1–A3 is Monte Carlo simulation. Assuming knowledge of an underlying probability distribution for each activity time, these times can be generated randomly and used to estimate the critical path length.
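The PERT calculation just described is simple to carry out. The sketch below uses the standard beta-approximation formulas, tₑ = (a + 4m + b)/6 and σ = (b − a)/6, which the text refers to but does not state explicitly, and then invokes Assumptions A1–A3 to approximate the probability of completing a given critical path by a deadline. The three-point estimates are invented for illustration.

```python
import math

def pert_estimate(a, m, b):
    """Mean and standard deviation of one activity time under the
    standard PERT beta approximation."""
    return (a + 4 * m + b) / 6.0, (b - a) / 6.0

def path_on_time_prob(activities, deadline):
    """P(path length <= deadline), assuming independent activity times (A1)
    and a normally distributed path total (A3)."""
    mean = sum(pert_estimate(*act)[0] for act in activities)
    var = sum(pert_estimate(*act)[1] ** 2 for act in activities)
    z = (deadline - mean) / math.sqrt(var)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical critical path: (optimistic, most likely, pessimistic) per activity.
path = [(1, 2, 9), (2, 4, 6), (3, 5, 13)]
prob = path_on_time_prob(path, deadline=16)
```

Note how short paths make Assumption A3 fragile: with only three activities in series, the normal approximation to the path total, and especially to its tail probabilities, can be poor.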
However, whereas PERT is widely used in practice, Monte Carlo simulation is not (Schonberger 1981, Kwak and Ingall 2007). This is apparently because of a lack of management resources allocated to simulation analysis, and also because of inadequate computing capability. Moreover, in the context of project management, both PERT and Monte Carlo simulation are generally used as descriptive rather than prescriptive methods (Bowman 1994). Although stochastic programming has been used (Bowman 1994, Gutjahr et al. 2000) in conjunction with either technique to make prescriptive crashing decisions, the difficulty of estimating the required probability distributions (Williams 2003) limits its practical usefulness.

In most projects, information about the progress of all ongoing activities within a project is updated periodically, for example weekly. A standard methodology for reporting project progress is earned value analysis (U.S. Department of Defense 1962). Earned value analysis compares both the time used and the cost spent so far against the amount of progress that has been achieved in the project (Fleming and Koppelman 2005). The resulting differences are known as time variance and cost variance, respectively. Importantly, these variances may correlate with similar variances during the remainder of the same activity. This is especially likely if the project team, and where applicable the subcontractors, being used are the same as during the earlier part of the activity. Klastorin (2004) describes various methods for the estimation of activity time and cost at completion, based on partial activity progress. In addition, as discussed above, there may be correlation with performance on subsequent activities.

Robust optimization is introduced by Soyster (1973), and popularized by Ben-Tal and Nemirovski (1998), El-Ghaoui et al. (1998), and Bertsimas and Sim (2004). There is apparently only one previous application of robust optimization to the time-cost tradeoff problem in project management. Cohen et al. (2007) consider the problem of minimizing total crash cost plus linear project completion time cost. They assume prespecified interval or ellipsoidal uncertainty sets, and develop linear decision rules for crashing decisions. The size of the uncertainty sets is adjustable and depends on the level of ambiguity averseness of the decision maker. In the objective of their model, they minimize the worst-case total cost over their uncertainty sets. One concern about their proposed model is that it is difficult to calibrate the uncertainty level for the uncertainty sets. Another is that minimizing the worst-case total cost over the uncertainty sets can provide very conservative results.
In our opinion, these concerns limit the practical usefulness of the proposed model.

In this paper, we model the objective of project completion on time and on budget by considering total project cost, consisting of a project completion time penalty plus total crash cost. Our model uses linear and linear-based decision rules (Ben-Tal et al. 2004, Goh and Sim 2010) to model these decisions. Since decision rules are functions of the uncertainties in the problem, they are capable of adjusting decisions, based on the realization of uncertainties during the project. Hence, the solutions from our models use information that is received as the project progresses, to obtain activity start time and crashing decisions that perform well against all possible scenarios of activity time uncertainty. Whereas PERT does not recognize correlation between previous and future performance on a single activity, or between performance on previous and subsequent activities, our methodology allows for their modeling, up to the limits of available information. Our crashing strategies are implemented in both static and rolling horizon modes. We show computationally that our crashing strategies provide a much higher probability of meeting the overall project budget than PERT-based crashing strategies. The relative advantages of static and rolling horizon implementations are also discussed. We also identify several managerial insights from our results.

This paper is organized as follows. In Section 2, we provide our notation and a formal description of the problem to be studied. In Section 3, we discuss various formulations of the objective function in the problem. Section 4 describes our development of a robust optimization model for project planning with crashing. Section 5 describes a computational study that compares the performance of our crashing strategies against those of PERT and Monte Carlo simulation. Finally, Section 6 contains a conclusion and some suggestions for future research.

2  Preliminaries

In this section, we provide our de?nitions and notation, as well as a formal description of the problem studied.

2.1  General notation

We denote a random variable by the tilde sign, for example x̃. Bold lower case letters such as x represent vectors, and bold upper case letters such as A represent matrices. Given a matrix A, its transposed ith row is denoted by a_i and its (i, j)th element is denoted by A_ij. We denote by e the vector of all ones and by e_i the ith standard basis vector. In addition, x⁺ = max{x, 0} and x⁻ = max{−x, 0}. The same notation can be applied to vectors, for example y⁺ and z⁻, which denotes that the corresponding operations are performed component-wise. For any set S, we denote by 1_S the indicator function on that set. Also, we denote by [N] the set of positive running indices to N, i.e. [N] = {1, . . . , N}, for some positive integer N. For completeness, we assume that [0] = ∅.
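As a small illustration of the component-wise notation, the positive and negative parts of a vector always satisfy the identity y = y⁺ − y⁻ (a sketch, not from the paper):

```python
def pos(v):
    """Component-wise positive part, x+ = max{x, 0}."""
    return [max(x, 0) for x in v]

def neg(v):
    """Component-wise negative part, x- = max{-x, 0}."""
    return [max(-x, 0) for x in v]

y = [3, -2, 0, -5]
# y+ - y- recovers y component-wise.
assert [p - n for p, n in zip(pos(y), neg(y))] == y
```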


2.2  Problem description

We define a project to be a finite collection of N individual activities, with precedence relationships between the activities defined as part of the project specification. The project completes when every activity has completed. The project manager faces the problem of completing the project on budget, where the total project cost includes both completion time penalty cost and total crash cost. In order to achieve this, two sets of decisions are needed. First, the starting time of each activity in the project must be determined, taking into account the restrictions imposed by the precedence relationships. Second, decisions must be made about by how much to crash each activity. The project completion target, τ, and the crashing budget, B, are taken as fixed exogenous parameters. In practice, these parameters are typically set at the project planning stage by senior management, after some input or negotiation from the project manager.

In many projects, activity times are uncertain (Klastorin 2004, Kerzner 2009). We model uncertainty using a probability space (ℝᴺ, B(ℝᴺ), P). We denote by z̃ the elements of the sample space ℝᴺ, which represent the uncertain activity times. The Borel σ-algebra B(ℝᴺ) is the set of events that are assigned probabilities by the probability measure P. We denote by E_P(·) the expectation with respect to P.

The precedence relationships between the activities of a project can be represented by a directed acyclic graph, or project network. Two widely used conventions for this network are the Activity-on-Node (AON) and Activity-on-Arc (AOA) formats (Klastorin 2004). In this paper, we adopt the AOA convention. The project has N activities, and therefore the graph has N arcs. We let M represent the number of nodes in the graph, and define the set of node pairs linked by arcs as

  E ≜ {(i, j) : ∃ an arc from node i to node j}.

The bijective mapping function φ : E → [N] is the unique map from a node pair (i, j) ∈ E to an activity k ∈ [N].
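The AOA representation above can be made concrete in a few lines. The sketch below builds a small four-node network (the arc layout is invented for illustration), the edge set E as the keys of a mapping that plays the role of φ, and the earliest node start times given realized activity durations; the start time of the final node is then the project completion time.

```python
# The bijection phi on E: arc (i, j) -> activity index k.
phi = {(1, 2): 1, (2, 3): 2, (1, 3): 3, (3, 4): 4}
M = 4  # number of nodes; node 1 is the start, node M the finish

def earliest_starts(phi, M, dur):
    """Earliest start time of each node: x_j is the max over incoming
    arcs (i, j) of x_i + duration of activity phi(i, j). Assumes nodes
    are numbered in topological order, as is conventional for AOA."""
    x = {n: 0.0 for n in range(1, M + 1)}
    for j in range(2, M + 1):
        x[j] = max(x[i] + dur[k] for (i, jj), k in phi.items() if jj == j)
    return x

# Illustrative realization: every activity takes 0.5 weeks.
x = earliest_starts(phi, M, dur={1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5})
# x[M] is the project completion time for this realization.
```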

2.3  Information flow

The uncertain activity times are revealed over time by two mechanisms. First, whenever an activity is completed, its previously uncertain activity time becomes known. We use this information to form non-anticipative linear decision rules for crashing and start time decisions. Since these decision rules are defined as affine functions of the revealed uncertainties, the actual decisions implicitly adjust themselves to the already revealed activity times. We discuss the development of these decision rules in Section 4.2.

Second, the project manager receives periodic updates from activity managers on the progress of each activity. We denote the length of each time period between updates by T. At the end of each time period, time variance and cost variance are used to monitor the progress of each activity and the overall status of the project with respect to the schedule and budget. Each periodic update provides the manager with updated resource information (budget, schedule, and allowable crashing) and an updated uncertainty description. Using the updated information, the project manager may choose to reallocate crashing resources between activities. We use this periodic information gain, as well as the activity completion time information described above, to develop an iterative solution to the updated model, which we discuss in Section 4.6.
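The time variance and cost variance used in these periodic updates are simple arithmetic identities from earned value analysis. A minimal sketch, with made-up status figures, computes them from the three standard earned value quantities: planned value (PV), earned value (EV), and actual cost (AC).

```python
def earned_value(pv, ev, ac):
    """Earned value analysis: schedule (time) variance, cost variance,
    and the associated performance indices."""
    return {
        "schedule_variance": ev - pv,  # negative means behind schedule
        "cost_variance": ev - ac,      # negative means over budget
        "spi": ev / pv,                # schedule performance index
        "cpi": ev / ac,                # cost performance index
    }

# Hypothetical weekly report: $50k of work planned, $40k earned, $45k spent.
status = earned_value(pv=50.0, ev=40.0, ac=45.0)
```

These per-period variances are exactly the quantities whose correlation with future performance, within and across activities, our models exploit.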

3  Objective Function

As discussed in the Introduction, the project manager faces the two, usually conflicting, objectives of completing the project on schedule and within budget. We consider the objective of minimizing the total project cost, which includes completion time penalty cost plus total crashing cost. Since the individual activity times are subject to uncertainty, the project completion time, T̃, is also uncertain. In many cases, when projects run past their deadline, a contract penalty is incurred. This penalty can be modeled as a continuous function of the project completion time in excess of the deadline, i.e. p(T̃ − τ). The penalty function has the following properties:

1. t ≤ 0 ⇒ p(t) = 0.
2. p(·) is nondecreasing and convex.

The first property implies that if the project is completed on schedule, no additional cost is incurred. The second property increasingly penalizes later completion. Both properties are representative of typical business practice (Kerzner 2009).

The activity crashing cost, denoted by C̃, forms the other cost component. Although the crashing cost is a deterministic function of the amount of crashing, decisions about crashing are driven by uncertainties in the activity times. Hence, C̃ is ultimately an uncertain parameter as well. A possible objective is to maximize the probability that the combined cost, Ṽ = C̃ + p(T̃ − τ), is within the project budget, B. However, maximizing success probability fails to consider the extent of overrun relative to a budget that is missed.

Example 1. Consider two project completion scenarios with an overall project budget of $10m. Scenario A completes the project at a combined cost of $9m with probability 79%, and $11m with probability 21%. Scenario B completes the project at a combined cost of $9m with probability 80%, and $1,000m with probability 20%. Under the objective of probability maximization, scenario B is preferred. However, few managers would prefer this alternative, because it ignores tail risk. A similar argument is used in the finance literature to justify the use of Conditional Value-at-Risk instead of Value-at-Risk (Alexander and Baptista 2004).

Brown and Sim (2009) introduce quasiconcave satisficing measures (QSMs), a class of functions for evaluating an uncertain prospect against an aspiration level, known as satisficing measures. The probability of success is a member of this class. QSMs favor diversification behavior by the decision maker. We adopt the CVaR satisficing measure, which is a QSM defined as follows:

Definition 1. Given a random variable Ṽ representing uncertain expenditure, and a budget B, the CVaR satisficing measure is defined by

  ρ_B(Ṽ) = sup{α ∈ (0, 1) : μ_α(Ṽ) ≤ B}  if feasible;  0 otherwise,

where μ_α is the α-Conditional Value-at-Risk (Rockafellar and Uryasev 2000), given by

  μ_α(Ṽ) = inf_v { v + (1/(1 − α)) · E_P[(Ṽ − v)⁺] }.
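For a discrete distribution, μ_α can be computed directly from the Rockafellar–Uryasev formula by minimizing over v; for a finite set of outcomes the infimum is attained at one of them. The sketch below applies this to the two scenarios of Example 1 at an illustrative level α = 0.75, showing how scenario B's heavy tail inflates its CVaR far beyond scenario A's, even though B has the higher success probability.

```python
def cvar(outcomes, probs, alpha):
    """alpha-CVaR via mu_alpha(V) = inf_v { v + E[(V - v)^+] / (1 - alpha) }.
    The objective is piecewise-linear and convex in v, so for a discrete
    distribution it suffices to check v at each outcome."""
    def obj(v):
        tail = sum(p * max(o - v, 0.0) for o, p in zip(outcomes, probs))
        return v + tail / (1.0 - alpha)
    return min(obj(v) for v in outcomes)

# Example 1 (costs in $m), budget B = 10.
cvar_a = cvar([9.0, 11.0], [0.79, 0.21], alpha=0.75)    # moderate overrun
cvar_b = cvar([9.0, 1000.0], [0.80, 0.20], alpha=0.75)  # catastrophic tail
```

Scenario A's 0.75-CVaR is 10.68, barely above the budget, while scenario B's is 801.8, reflecting exactly the tail risk that probability maximization ignores.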

To understand the satisficing properties of ρ_B(Ṽ), observe that if Ṽ always falls below the budget, B, then μ_α(Ṽ) ≤ B for all α ∈ (0, 1) and hence ρ_B(Ṽ) = 1. On the other hand, if Ṽ never achieves the target, then μ_α(Ṽ) > B and hence ρ_B(Ṽ) = 0. The CVaR satisficing measure is introduced by Chen and Sim (2009) using the terminology shortfall aspiration level criterion. Brown and Sim (2009) show that, among all QSMs, ρ_B(Ṽ) has the largest lower bound for the success probability, P(Ṽ ≤ B). Moreover, Brown et al. (2009) show that the measure is able to resolve classical behavioral paradoxes that arise in expected utility theory. Furthermore, the solution obtained by optimizing the satisficing level often provides a high probability of project success. For example, if uncertainties are normally distributed, optimal solutions found under the CVaR satisficing measure are also optimal under the probability measure. We also substantiate this claim with an extensive computational study.

Another justification for using the CVaR satisficing measure in the objective is that the alternative of a probability measure does not consider tail risk. Chen and Sim (2009) show that if E_P[Ṽ] < B, then

  ρ_B(Ṽ) = sup_{a>0} E_P[ min{1, a(B − Ṽ)} ] > 0.    (1)

This representation reflects, first, an indifference toward costs that fall within the budget by a margin of at least 1/a, and second, an increasing aversion toward costs that exceed the budget by larger amounts.
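Representation (1) also gives a direct numerical handle on ρ_B for a discrete cost distribution: evaluate E[min{1, a(B − V)}] over a grid of a > 0 and take the maximum. The sketch below applies this to Example 1; the grid resolution is an arbitrary choice, and a nonpositive maximum simply signals that the representation's premise E[V] < B fails, consistent with ρ_B = 0.

```python
def satisficing_lb(outcomes, probs, budget, grid):
    """Maximize E[min(1, a*(budget - V))] over a grid of a > 0,
    a grid approximation to representation (1)."""
    def val(a):
        return sum(p * min(1.0, a * (budget - o))
                   for o, p in zip(outcomes, probs))
    return max(val(a) for a in grid)

grid = [k / 100.0 for k in range(1, 501)]  # a = 0.01, 0.02, ..., 5.00
rho_a = satisficing_lb([9.0, 11.0], [0.79, 0.21], 10.0, grid)
rho_b = satisficing_lb([9.0, 1000.0], [0.80, 0.20], 10.0, grid)
```

Scenario A attains 0.58 (at a = 1), while scenario B never attains a positive value: its expected cost of $207.2m already exceeds the budget, so ρ_B = 0, again the opposite ranking from probability maximization.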

4  Robust Optimization Model

In this section, we apply certain assumptions to the structure of the activity time uncertainty in the problem, in order to formulate a model that is computationally tractable. In keeping with practical situations, the project manager rarely has full knowledge of the probability distribution, P. Hence, we assume only certain distributional properties of the uncertain activity times, such as the mean, covariance matrix, and support, which characterize a family of distributions F containing P. We describe F in Section 4.1 below. Because decision makers are typically ambiguity averse (Ellsberg 1961), we evaluate the worst-case objective over the family of distributions F, that is,

  Z₀ = sup_{a>0} inf_{P∈F} E_P[ min{1, a(B − C̃ − p(T̃ − τ))} ],    (2)

using a similar representation to (1), where sup_{P∈F} E_P[C̃ + p(T̃ − τ)] < B. Somewhat surprisingly, by imposing this additional layer of modeling complexity, and using modest assumptions on the structure of p(·) and F, we can compute a tractable bound on Z₀.

4.1  Model of uncertainty

We assume the family of distributions, F, is described by the following distributional properties.

Support. We denote by W ⊆ ℝᴺ the smallest bounded convex set containing the support of z̃. For example, if the actual support of z̃ is nonconvex, we take W as its convex hull. We further assume that W is a full-dimensional tractable conic representable set, that is, a set that can be represented, exactly or approximately, by a polynomial number of linear and/or second-order conic constraints.

Mean. We denote by z̄ the mean of z̃. Instead of modeling the mean as a precisely known quantity, we consider a generalization in which the mean z̄ is itself uncertain, with corresponding, possibly unbounded, support contained in a set Ŵ. We again assume that Ŵ is a tractable conic representable set. This includes the case of a known mean, which corresponds to Ŵ being a singleton set.

Covariance. We denote by Σ the known covariance of z̃.

4.2  Decision rules

Ideally, we would like the crashing and activity start time decisions to be decision rules (DRs), i.e. measurable functions of the underlying uncertainties. As the project progresses, we impose non-anticipativity requirements on the DRs, thus requiring that they have functional dependencies only on previously revealed uncertainties. We model the non-anticipativity requirements using an information index set I ⊆ [N]. Each DR has functional dependence only on the components of the uncertainty vector with indices within the DR's corresponding information index set. Specifically, we define the parametrized set of functions as

  Y(m, N, I) ≜ { f : ℝᴺ → ℝᵐ : f(z + Σ_{i∉I} δ_i e_i) = f(z), ∀δ ∈ ℝᴺ }.

The first two parameters specify the dimensions of the codomain and domain of the functions, respectively, while the final parameter enforces the non-anticipativity requirement. For example, if we have w ∈ Y(3, 4, {2, 4}), then the DR w(·) is a function characterized by w(z̃) = w(z̃₂, z̃₄), ∀z̃ ∈ ℝ⁴. Since 1 and 3 are not included in the information index set of this DR, we have no guarantee that the values of z̃₁ or z̃₃ will be revealed before we make the decision w(·), and correspondingly, we cannot use the values of z̃₁ or z̃₃ in this DR.

Hence, denoting the activity start times as x(z̃) ∈ ℝᴹ and the activity crashing decisions as y(z̃) ∈ ℝᴺ, we can write

  x_i ∈ Y(1, N, I_i^x), ∀i ∈ [M],
  y_k ∈ Y(1, N, I_k^y), ∀k ∈ [N],
where the structure of {I_i^x}_{i=1}^M and {I_k^y}_{k=1}^N depends on the precedence constraints in the project network.

Information index sets and their associated DRs provide sufficient flexibility to model the robust crashing problem, as in Section 4.3. However, the use of general DRs typically makes the optimization model computationally intractable (Ben-Tal et al. 2004). Therefore, instead of considering DRs from the space of all functions Y, we restrict these decisions to be affine functions of the uncertainties. These linear decision rules (LDRs) are approximations that ensure computational tractability. They also allow simplicity of implementation and can easily be extended to more complex bi-deflected LDRs (Goh and Sim 2010) for closer approximation. Specifically, we define the parametrized set of affine functions as

  L(m, N, I) = { f : ℝᴺ → ℝᵐ : ∃(y⁰, Y) ∈ ℝᵐ × ℝ^{m×N} : f(z) = y⁰ + Y z, Y e_i = 0, ∀i ∉ I }.    (3)

For LDRs, the non-anticipativity requirements take the explicit form of a set of linear equality constraints. For example, if w ∈ L(3, 4, {2, 4}), then the LDR w(z̃) can be represented as the affine function w(z̃) = w⁰ + W z̃, with the additional constraints that the first and third columns of the 3 × 4 matrix W must be zero. An equivalent representation is w(z̃) = w⁰ + z̃₂ w₂ + z̃₄ w₄. If w(z̃) is used in an optimization procedure, then we solve for optimized values of (w⁰, w₂, w₄). Observe that the actual numerical value of w(z̃) is known only after the uncertainties (z̃₂, z̃₄) have been revealed.
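A non-anticipative LDR is straightforward to encode directly: keep a coefficient matrix whose columns outside the information index set are forced to zero. The sketch below reproduces the w ∈ L(3, 4, {2, 4}) example; the particular coefficient values are arbitrary.

```python
def make_ldr(w0, W, info):
    """Build the LDR z -> w0 + W z, zeroing column j of W for every
    j not in the information index set `info` (1-based indices),
    which enforces non-anticipativity as in (3)."""
    W = [[Wij if (j + 1) in info else 0.0 for j, Wij in enumerate(row)]
         for row in W]
    def w(z):
        return [w0[i] + sum(W[i][j] * z[j] for j in range(len(z)))
                for i in range(len(w0))]
    return w

w = make_ldr(w0=[1.0, 0.0, 2.0],
             W=[[9.0, 0.5, 9.0, -1.0],   # entries in columns 1 and 3
                [9.0, 1.0, 9.0, 0.0],    # (the 9.0s) are zeroed out
                [9.0, 0.0, 9.0, 3.0]],
             info={2, 4})

# The decision depends only on z2 and z4: perturbing z1, z3 changes nothing.
assert w([0.0, 1.0, 0.0, 2.0]) == w([5.0, 1.0, -7.0, 2.0])
```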

4.3  Project crashing model

We assume without loss of generality that the project commences at node 1 and terminates at the final node M. Hence, the project completion time is expressed as T̃ = x_M(z̃).

The crashing cost for the kth activity can be expressed as a function c_k : ℝ₊ → ℝ₊ of the amount of crashing for that activity. This function is nondecreasing and weakly convex on its domain, and has c_k(0) = 0. In practice, a typical cost function is increasing, piecewise-linear and convex (Kelley and Walker 1959), where each piece of the function represents a different type of resource used to crash the activity, and each resource type has its own linear cost rate. The total crashing cost is then the sum of the individual crashing costs, C̃ = Σ_{k=1}^N c_k(y_k(z̃)).

Substituting these quantities into the robust objective function (2), we obtain the following

static robust optimization model for project crashing:

  Z₀* = max_{x(·), y(·), a}  inf_{P∈F} E_P[ min{ 1, a( B − Σ_{k=1}^N c_k(y_k(z̃)) − p(x_M(z̃) − τ) ) } ]

  s.t.  x_j(z̃) − x_i(z̃) ≥ (z̃_k − y_k(z̃))⁺,   ∀(i, j) ∈ E, k = φ(i, j)
        0 ≤ y(z̃) ≤ u
        x(z̃) ≥ 0
        x_i ∈ Y(1, N, I_i^x),  ∀i ∈ [M]
        y_k ∈ Y(1, N, I_k^y),  ∀k ∈ [N]
        a ≥ 0.                                                              (4)

In model (4), u is a vector that represents the practical limit on how much each activity can be crashed. This parameter is part of the project specification. The solution (x(·), y(·)) to problem (4) is a set of decision rules that prescribes the start time of each node and the amount of crashing for each activity. Finally, the (z̃_k − y_k(z̃))⁺ term in the model ensures that the crashed activity time remains nonnegative.

4.4  Example project

In order to compare our crashing procedure with several PERT-based and simulation-based heuristics, we consider a project with the network shown in Figure 1. Though small, this network suffices to illustrate several of the ideas in the discussion that follows. An extensive computational study using randomly generated networks appears in Section 5.

Figure 1: Example Project Network.

In the example in Figure 1, there are four activities and four nodes, i.e. N = 4 and M = 4. There are two paths from node 1 to node 4. Path A comprises activities 1-2-4, and path B comprises activities 3-4. The cost and schedule parameters are given as follows. The scheduled completion time τ = 0.9 weeks, and the penalty cost function p(·) is a linear cost rate of p = $4 per week in excess of τ. The cost of crashing each activity is described by a linear cost rate of $5 per week for activities 1 and 3, and $1 per week for the other activities, i.e. c = [5, 1, 5, 1]′. Each activity can be crashed by at most one week. The project budget is B = $2. Each activity has random completion time between 0 and 1, i.e. the support of the uncertainties is the unit hypercube. The mean of each activity time is 0.5 weeks, μ = 0.5e, and the covariance matrix is

  Σ = 0.09 · [ 1     9/17  0  0
               9/17  1     0  0
               0     0     1  0
               0     0     0  1 ].

PERT is essentially a descriptive methodology for estimating project completion time. We now describe several heuristic crashing strategies for this project, based on PERT. The simplest strategy is not to crash any activities, and to reserve the entire budget to pay the penalty cost. This is denoted as Heuristic NOCRASH, with crashing decision vector y = [0, 0, 0, 0]′. We use this solution as a baseline strategy for comparison.

An alternative strategy is to use the standard model for PERT, which analyzes the network based on expected activity times. Under this model, only path A is critical, and in order for the project to have an expected completion time within the scheduled deadline, we have to crash activity 4 by at least 0.6 weeks. Following the procedure of Siemens (1971), activity 4 is chosen over activity 2, even though both have the same crash cost rate, since activity 4 lies on both paths. This strategy is denoted as MINCRASH and has crashing decision vector y = [0, 0, 0, 0.6]′. This leaves $1.4 of the budget to pay any penalty costs associated with late project completion.

Finally, we compare against a heuristic based on Monte Carlo simulation, which we denote by MCSIM. Monte Carlo simulation has several advantages over PERT.
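The MINCRASH arithmetic can be checked in a few lines. With expected durations of 0.5 weeks each, path A (activities 1, 2, 4) has expected length 1.5 weeks and path B (activities 3, 4) has 1.0 week, so in the deterministic expected-time view, meeting τ = 0.9 requires 0.6 weeks of crashing on path A; activity 4 is the cheapest choice that also shortens path B. A sketch (treating crashing as reducing the expected path length directly, as in the deterministic PERT view):

```python
mean = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5}   # expected activity times (weeks)
paths = {"A": [1, 2, 4], "B": [3, 4]}
tau = 0.9                                  # scheduled completion time

y = {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.6}      # the MINCRASH crash vector

def expected_length(path, y):
    """Expected path length after crashing, deterministic PERT view."""
    return sum(mean[k] - y[k] for k in path)

lengths = {name: expected_length(p, y) for name, p in paths.items()}
crash_cost = sum(c * y[k] for k, c in zip([1, 2, 3, 4], [5, 1, 5, 1]))
budget_left = 2.0 - crash_cost            # remainder available for penalties
```

Path A drops to exactly τ = 0.9, the crash cost is $0.6, and $1.4 of the $2 budget remains, matching the figures quoted above.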
It uses a richer description of the uncertainties, and does not require the asymptotic approximation of normality that PERT uses. Moreover, Monte-Carlo simulation directly models activity time correlation and overtaking of the critical path, two issues that are ignored in PERT. Monte Carlo simulation is essentially a descriptive technique for project analysis, rather than a prescriptive one. However, two streams of research apply Monte Carlo simulation results prescriptively. Bowman (1994) develops a procedure for using simulation results prescriptively in a time-cost tradeo? problem with the objective of minimizing the expected project completion time. Two important assumptions of the procedure are the existence of known probability density 12

functions, and independence of the uncertain activity times. Simulation based estimates for derivatives of the expected completion time with respect to activity time distribution parameters are found and incorporated into a greedy crashing heuristic. Gutjahr et al. (2000) develop an enumerative stochastic optimization approach to make binary crashing allocation decisions that minimize the total of crash cost and expected penalty for late project completion. They make the same two assumptions. By contrast, our work explicitly models correlation between the activity times and optimizes over the worst case probability distribution. In order to obtain prescriptive information at low computational cost, we now develop a benchmark heuristic for obtaining crash times from the simulated sample path data. The objective in our heuristic is the same as that used by Gutjahr et al. (2000). Assuming that we have already simulated L random vectors of activity times from some distribution, which we collect into a matrix Z ?
N ×L

, we solve the following two-stage stochastic optimization problem using

sampling average approximation. 1 ck yk + min p (XM l ? ? ) X ,y L k=1 l=1 s.t. Xjl ? Xil ? (Zkl ? yk )+ , ?(i, j ) ? E , k = ?(i, j ), l ? [L] 0?y?u X ? 0,
N L

(5)

and use the optimal vector y* as the heuristic crash vector. We note that this is a simplification of the desired crashing decisions, which should be functions of the activity times as in model (4). However, providing such flexibility in crashing decisions would make the stochastic optimization problem highly intractable. Each column of the matrix X \in \Re^{M \times L} represents the node start times in the corresponding sample of activity times. We summarize these various crashing strategies in Table 1.

Strategy     Crashing Decision Vector, y
NOCRASH      [0, 0, 0, 0]
MINCRASH     [0, 0, 0, 0.6]
MCSIM        From solving model (5)

Table 1: PERT-Based Heuristic Crashing Strategies.
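To make model (5) concrete, here is a minimal, self-contained sketch in Python. It assumes a toy two-activity series project with made-up samples, a single linear penalty piece in place of the general piecewise-linear p(·), and a coarse grid search standing in for a stochastic-LP solver; none of these numbers come from the paper.

```python
# Sketch: sample-average approximation of the crashing model (5) on a toy
# two-activity series project. With a series network, node start times
# collapse to X_M = sum_k (Z_k - y_k)^+, so we can grid-search the crash
# vector y instead of calling an LP solver. All data are illustrative.
import itertools

def saa_objective(y, Z, c, p, tau):
    """Crash cost plus average linear lateness penalty over L samples."""
    L = len(Z[0])
    crash_cost = sum(ck * yk for ck, yk in zip(c, y))
    penalty = 0.0
    for l in range(L):
        # series network: project length is the sum of reduced activity times
        x_M = sum(max(Z[k][l] - y[k], 0.0) for k in range(len(y)))
        penalty += p * max(x_M - tau, 0.0)
    return crash_cost + penalty / L

# Z[k][l]: l-th sampled duration of activity k (assumed inputs)
Z = [[4.0, 5.0, 6.0], [3.0, 4.0, 5.0]]
c = [1.0, 2.0]          # crash cost rates
p, tau = 10.0, 6.0      # lateness penalty rate and due date
u = [2.0, 2.0]          # crash limits, 0 <= y <= u

# crude grid search standing in for the stochastic LP solve
grid = [i * 0.5 for i in range(5)]  # 0.0 .. 2.0
best_y = min(itertools.product(grid, grid),
             key=lambda y: saa_objective(y, Z, c, p, tau))
print(best_y, round(saa_objective(best_y, Z, c, p, tau), 3))
```

In a realistic instance, the grid search would be replaced by solving the linear program directly, since the positive-part terms are linearizable with auxiliary variables.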


4.5 Approximate solution of project crashing model

Due to the intractability of the optimal crashing model (4), we need to construct tractable approximations. We begin by assuming that the project penalty cost and activity crashing costs are piecewise-linear convex functions, represented respectively as

\[
p(x) = \max_{m \in [K]} \{ p_m x + q_m \}
\quad \text{and} \quad
c_k(y) = \max_{m \in [L]} \{ c_{km} y + d_{km} \}, \quad \forall k \in [N],
\]

where [K] and [L] denote the index sets of the linear cost function pieces in each case. Such piecewise-linear costs arise naturally in practice (Kelley and Walker 1959), and they also approximate general convex functions well.

4.5.1 Linear decision rule approximations

We begin with a linear approximation of problem (4), where we restrict all decision rules to be LDRs. This LDR model involves solving the following robust optimization problem. By introducing auxiliary decision rules r, s, and t, model (4) can be equivalently expressed as

\[
\begin{aligned}
1 - Z_{LDR} = \min \ & \sup_{\hat{z} \in \hat{W}} \left( r^0 + r'\hat{z} \right) \\
\text{s.t.} \ & 1 - \alpha B + \sum_{k=1}^{N} s_k^0 + t^0 - r^0 + \Big( \sum_{k=1}^{N} s_k + t - r \Big)' z \le 0, && \forall z \in W \\
& c_{km} y_k^0 + \alpha d_{km} - s_k^0 + ( c_{km} y_k - s_k )' z \le 0, && \forall z \in W,\ \forall m \in [L],\ \forall k \in [N] \\
& p_m x_M^0 + \alpha q_m - \alpha p_m \tau - t^0 + ( p_m x_M - t )' z \le 0, && \forall z \in W,\ \forall m \in [K] \\
& x_j^0 - x_i^0 + ( x_j - x_i )' z \ge \alpha z_k - y_k^0 - y_k' z, && \forall z \in W,\ \forall (i,j) \in E,\ k = \phi(i,j) \\
& x_j^0 - x_i^0 + ( x_j - x_i )' z \ge 0, && \forall z \in W,\ \forall (i,j) \in E \\
& 0 \le y_k^0 + y_k' z \le \alpha u_k, && \forall z \in W,\ \forall k \in [N] \\
& x_i^0 + x_i' z \ge 0, && \forall z \in W,\ \forall i \in [M] \\
& r^0 + r' z \ge 0, && \forall z \in W \\
& \alpha \ge 0 \\
& x_{ij} = 0, && \forall i \in [M],\ \forall j \notin I_i^x \\
& y_{kj} = 0, && \forall k \in [N],\ \forall j \notin I_k^y,
\end{aligned}
\tag{6}
\]

with decision variables \alpha, t^0, t, r^0, r, \{x_i^0, x_i\}_{i=1}^{M}, and \{y_k^0, y_k, s_k^0, s_k\}_{k=1}^{N}.

This model is a standard robust linear optimization problem, which can be solved using robust counterpart techniques from the literature of robust optimization (Ben-Tal and Nemirovski 1998, Bertsimas and Sim 2004). The tractability of model (6) follows from the structural assumptions on W and \hat{W}. In particular, if W and \hat{W} are polyhedral, model (6) reduces to a simple linear program. We summarize the relationship between this linear approximation and the original problem in the two theorems that follow.

Theorem 1 If model (6) has a feasible solution with \alpha > 0, then

\[
x_i^*(\tilde{z}) = \frac{1}{\alpha} \left( x_i^0 + x_i' \tilde{z} \right), \ \forall i \in [M],
\qquad
y_k^*(\tilde{z}) = \frac{1}{\alpha} \left( y_k^0 + y_k' \tilde{z} \right), \ \forall k \in [N],
\]

is a feasible solution to (4).

Proof. By assumption, x^*(\tilde{z}) and y^*(\tilde{z}) are affine functions of \tilde{z}. From the last two constraints of model (6), and recalling the definition (3) of L(m, N, I), we have

\[
x_i^* \in L(1, N, I_i^x) \subseteq Y(1, N, I_i^x), \ \forall i \in [M],
\qquad
y_k^* \in L(1, N, I_k^y) \subseteq Y(1, N, I_k^y), \ \forall k \in [N].
\]

The first through third constraints in model (4) are implied by the fourth through seventh constraints in model (6). The other constraints in model (4) are implied by their corresponding constraints in model (6).
Theorem 2 The following inequality holds: Z_0^* \ge Z_{LDR}.

Proof. Observe that

\[
\begin{aligned}
1 - Z_0^* = \min \ & \sup_{P \in F} E_P \left( r(\tilde{z}) \right) \\
\text{s.t.} \ & 1 - \alpha B + \sum_{k=1}^{N} s_k(\tilde{z}) + t(\tilde{z}) \le r(\tilde{z}) \\
& \alpha c_{km} y_k(\tilde{z}) + \alpha d_{km} \le s_k(\tilde{z}), && \forall m \in [L],\ \forall k \in [N] \\
& \alpha p_m x_M(\tilde{z}) + \alpha q_m - \alpha p_m \tau \le t(\tilde{z}), && \forall m \in [K] \\
& x_j(\tilde{z}) - x_i(\tilde{z}) \ge \tilde{z}_k - y_k(\tilde{z}), && \forall (i,j) \in E,\ k = \phi(i,j) \\
& x_j(\tilde{z}) - x_i(\tilde{z}) \ge 0, && \forall (i,j) \in E \\
& 0 \le y(\tilde{z}) \le u \\
& x(\tilde{z}) \ge 0 \\
& r(\tilde{z}) \ge 0 \\
& \alpha \ge 0 \\
& x_i \in Y(1, N, I_i^x), && \forall i \in [M] \\
& y_k \in Y(1, N, I_k^y), && \forall k \in [N] \\
& s \in Y(N, N, [N]), \quad r, t \in Y(1, N, [N]).
\end{aligned}
\tag{7}
\]

From any feasible solution of (6), we can form the affine functions

\[
\begin{aligned}
x_i^*(\tilde{z}) &= \tfrac{1}{\alpha} ( x_i^0 + x_i' \tilde{z} ), \ \forall i \in [M], \\
y_k^*(\tilde{z}) &= \tfrac{1}{\alpha} ( y_k^0 + y_k' \tilde{z} ), \ \forall k \in [N], \\
s_k^*(\tilde{z}) &= s_k^0 + s_k' \tilde{z}, \ \forall k \in [N], \\
r^*(\tilde{z}) &= r^0 + r' \tilde{z}, \\
t^*(\tilde{z}) &= t^0 + t' \tilde{z}.
\end{aligned}
\]

By a similar argument to that in the proof of Theorem 1, any feasible solution to (6) results in a feasible solution to (7). Furthermore, since their objective functions coincide, we have Z_0^* \ge Z_{LDR}.

Theorem 1 implies that solving the robust optimization problem (6) for the LDR approximation results in feasible candidate LDRs for early start times and crashing decisions. Theorem 2 shows that the objective function value of problem (6) can be used as a lower bound on the unknown optimal value Z_0^*.
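The tractability remark above can be illustrated numerically. The sketch below (with illustrative data, not from the paper) shows why each semi-infinite constraint of model (6) collapses to a single linear inequality when the support W is a box: the worst case of an affine function over a box is attained at a known vertex.

```python
# Sketch: why model (6) becomes an ordinary LP when W is polyhedral.
# For a box support W = {z : zl <= z <= zu}, the worst case of an affine
# constraint a0 + a'z <= 0 over W is attained at a vertex, so each
# semi-infinite constraint collapses to one linear inequality.
# All numbers below are illustrative assumptions.

def worst_case_affine(a0, a, zl, zu):
    """max over the box of a0 + a'z: pick zu where a_i > 0, else zl."""
    return a0 + sum(max(ai * li, ai * ui) for ai, li, ui in zip(a, zl, zu))

a0, a = -10.0, [2.0, -1.0, 3.0]
zl, zu = [0.0, 0.0, 0.0], [1.0, 2.0, 1.0]

wc = worst_case_affine(a0, a, zl, zu)
print(wc)           # -10 + 2*1 + 0 + 3*1 = -5.0
print(wc <= 0.0)    # the robust constraint holds over the whole box
```

Each robust constraint of (6) is therefore checkable, and enforceable, as one deterministic linear inequality in the coefficients of the decision rules.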

4.5.2 Piecewise linear decision rule approximation

In this section, we extend the LDR approximation (6) by using piecewise-linear decision rules. This enables us to model the covariance matrix \Sigma. To do so, we solve the distributionally robust optimization problem
\[
\begin{aligned}
1 - Z_{PWLDR} = \min \ & \sup_{P \in F} E_P \left( r(\tilde{z}) \right)^+ + \sup_{P \in F} E_P \Big( 1 - \alpha B + e' s(\tilde{z}) + t(\tilde{z}) - r(\tilde{z}) \Big)^+ \\
& + \sum_{k=1}^{N} \sum_{m=1}^{L} \sup_{P \in F} E_P \big( c_{km} y_k(\tilde{z}) + \alpha d_{km} - s_k(\tilde{z}) \big)^+ \\
& + \sum_{m=1}^{K} \sup_{P \in F} E_P \big( p_m x_M(\tilde{z}) + \alpha q_m - \alpha p_m \tau - t(\tilde{z}) \big)^+ \\
& + p_K \sum_{(i,j) \in E} \sup_{P \in F} E_P \big( \alpha \tilde{z}_{\phi(i,j)} - x_j(\tilde{z}) + x_i(\tilde{z}) - y_{\phi(i,j)}(\tilde{z}) \big)^+ \\
& + p_K \sum_{(i,j) \in E} \sup_{P \in F} E_P \big( -x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \\
& + \sup_{P \in F} E_P \Big( c_L' \big( y(\tilde{z}) \big)^- + d' \big( y(\tilde{z}) - \alpha u \big)^+ \Big) + p_K e' \sup_{P \in F} E_P \big( x(\tilde{z}) \big)^- \\
\text{s.t.} \ & x_i \in L(1, N, I_i^x), \quad \forall i \in [M] \\
& y_k \in L(1, N, I_k^y), \quad \forall k \in [N] \\
& s \in L(N, N, [N]) \\
& r, t \in L(1, N, [N]),
\end{aligned}
\tag{8}
\]

where we define the parameter d \in \Re^N as d \triangleq ( p_K e - c_1 )^+, that is, d_k = ( p_K - c_{k1} )^+.

Before defining the decision rules for the original problem, we first define the following sets for notational simplicity. For any j \in [M], let

\[
P(j) \triangleq \{ i \in [M] : (i,j) \in E \},
\qquad
N(j) \triangleq \{ j \} \cup \bigcup_{i \in P(j)} N(i),
\qquad
A(j) \triangleq \{ (i,j) \in E : i \in P(j) \} \cup \bigcup_{i \in P(j)} A(i).
\]
P(j) represents the set of nodes that are immediate predecessors of j, while the recursively defined N(j) represents the set of all nodes that are predecessors of j. Similarly, A(j) represents the set of arcs that enter into all the predecessor nodes of j. We relate problem (8) to the original problem (4) and the LDR approximation (6) using the two theorems that follow.

Theorem 3 If model (8) has a feasible solution (\alpha, x(\tilde{z}), y(\tilde{z}), s(\tilde{z}), r(\tilde{z}), t(\tilde{z})) with \alpha > 0, then for each m \in [M],

\[
x_m^*(\tilde{z}) = \frac{1}{\alpha} x_m(\tilde{z}) + \frac{1}{\alpha} \Bigg( \sum_{i \in N(m)} \big( x_i(\tilde{z}) \big)^- + \sum_{\substack{(i,j) \in A(m) \\ k = \phi(i,j)}} \Big( \big( y_k(\tilde{z}) - \alpha u_k \big)^+ + \big( \alpha \tilde{z}_k - y_k(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ + \big( -x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \Big) \Bigg),
\]

and

\[
y^*(\tilde{z}) = \min \left\{ \max \left\{ \frac{1}{\alpha}\, y(\tilde{z}),\ 0 \right\},\ u \right\},
\]

is a feasible solution to (4).

Proof. By construction, the decision rules x^* and y^* satisfy the non-anticipativity requirements. We therefore have

\[
x_i^* \in Y(1, N, I_i^x), \ \forall i \in [M],
\qquad
y_k^* \in Y(1, N, I_k^y), \ \forall k \in [N].
\]

The second constraint of model (4) is satisfied directly from the construction of y^*(\tilde{z}). The third constraint is satisfied since, for all m \in [M],

\[
x_m^*(\tilde{z}) \ge \frac{1}{\alpha} x_m(\tilde{z}) + \frac{1}{\alpha} \big( x_m(\tilde{z}) \big)^- = \frac{1}{\alpha} \big( x_m(\tilde{z}) \big)^+ \ge 0
\quad \text{(since } m \in N(m) \text{)}.
\]

To show that the first constraint of model (4) is satisfied, we consider an arbitrary arc on the network, (i,j) \in E. We first observe that N(i) \subseteq N(j) and A(i) \subseteq A(j). Defining k = \phi(i,j), we have

\[
\begin{aligned}
x_j^*(\tilde{z}) - x_i^*(\tilde{z}) + y_k^*(\tilde{z})
&\ge \frac{1}{\alpha} \Big( x_j(\tilde{z}) - x_i(\tilde{z}) + \big( y_k(\tilde{z}) - \alpha u_k \big)^+ + \big( \alpha \tilde{z}_k - y_k(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ + \alpha y_k^*(\tilde{z}) \Big) \\
&= \frac{1}{\alpha} \Big( x_j(\tilde{z}) - x_i(\tilde{z}) + \big( y_k(\tilde{z}) - \alpha u_k \big)^+ + \big( \alpha \tilde{z}_k - y_k(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \\
&\qquad\quad + y_k(\tilde{z}) + \big( y_k(\tilde{z}) \big)^- - \big( y_k(\tilde{z}) - \alpha u_k \big)^+ \Big) \\
&\ge \frac{1}{\alpha} \Big( \alpha \tilde{z}_k + \big( x_j(\tilde{z}) - x_i(\tilde{z}) + y_k(\tilde{z}) - \alpha \tilde{z}_k \big) + \big( \alpha \tilde{z}_k - y_k(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \Big) \\
&= \frac{1}{\alpha} \Big( \alpha \tilde{z}_k + \big( x_j(\tilde{z}) - x_i(\tilde{z}) + y_k(\tilde{z}) - \alpha \tilde{z}_k \big)^+ \Big) \\
&\ge \tilde{z}_k.
\end{aligned}
\tag{9}
\]

The first inequality results from the inclusions N(i) \subseteq N(j) and A(i) \subseteq A(j), which cancel out many of the common terms; the remaining terms are nonnegative and give rise to the inequality. The final equation is a result of applying the identity x + x^- = x^+. Similarly, we have

\[
x_j^*(\tilde{z}) - x_i^*(\tilde{z})
\ge \frac{1}{\alpha} \Big( x_j(\tilde{z}) - x_i(\tilde{z}) + \big( -x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \Big)
= \frac{1}{\alpha} \big( x_j(\tilde{z}) - x_i(\tilde{z}) \big)^+ \ge 0.
\tag{10}
\]

Together, (9) and (10) imply the first constraint of (4), x_j^*(\tilde{z}) - x_i^*(\tilde{z}) \ge \big( \tilde{z}_k - y_k^*(\tilde{z}) \big)^+.
Since all the constraints are satisfied, the piecewise-linear decision rules x^* and y^* are feasible in (4).

In order to prove the next theorem, we need the following preliminary result.

Lemma 1 Any convex piecewise-linear function of a scalar-valued decision rule, with any information index set I, i.e., x \in Y(1, N, I), can be equivalently expressed as

\[
\max_{k \in [K]} \{ a_k x(\tilde{z}) + b_k \}
= \min_{t \in Y(1, N, [N])} \left\{ t(\tilde{z}) + \sum_{k=1}^{K} \big( a_k x(\tilde{z}) + b_k - t(\tilde{z}) \big)^+ \right\}.
\]

Proof. For any z \in W, define V(z) \triangleq \max_{k \in [K]} \{ a_k x(z) + b_k \}. Since z is fixed, x(z) is a known scalar, and therefore

\[
\max_{k \in [K]} \{ a_k x(z) + b_k \} = \min_{v \in \Re} \left\{ v + \sum_{k=1}^{K} \big( a_k x(z) + b_k - v \big)^+ \right\}.
\]

Further, define v^* : \Re^N \to \Re by

\[
v^*(z) \in \arg\min_{v \in \Re} \left\{ v + \sum_{k=1}^{K} \big( a_k x(z) + b_k - v \big)^+ \right\}.
\tag{11}
\]

By definition, v^* \in Y(1, N, [N]). Hence, V(z) \ge \min_{t \in Y(1,N,[N])} \{ t(z) + \sum_{k=1}^{K} ( a_k x(z) + b_k - t(z) )^+ \}. Suppose, for purposes of contradiction, that the previous inequality is strict for some z^* \in W for the optimized decision rule t^*. This implies that

\[
t^*(z^*) + \sum_{k=1}^{K} \big( a_k x(z^*) + b_k - t^*(z^*) \big)^+
< v^*(z^*) + \sum_{k=1}^{K} \big( a_k x(z^*) + b_k - v^*(z^*) \big)^+,
\]

which violates the definition of v^* in (11). Hence, we have

\[
V(z) = \min_{t \in Y(1, N, [N])} \left\{ t(z) + \sum_{k=1}^{K} \big( a_k x(z) + b_k - t(z) \big)^+ \right\},
\]

for all z \in W, which implies the statement in the lemma.
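A quick numerical spot-check of the identity underlying Lemma 1, for one fixed z (so that x(z) is a scalar). The coefficients are arbitrary illustrative values, and the minimum over v is approximated by a grid scan:

```python
# Numeric spot-check of the scalar identity behind Lemma 1: the
# piecewise-linear maximum equals the minimum over v of
# v + sum_k (a_k x + b_k - v)^+. All values are illustrative assumptions.

def hinge_form(v, terms):
    return v + sum(max(t - v, 0.0) for t in terms)

x = 1.7
a, b = [1.0, -2.0, 0.5], [0.0, 3.0, 1.0]
terms = [ai * x + bi for ai, bi in zip(a, b)]      # the a_k x + b_k values

direct_max = max(terms)
grid = [i * 0.001 for i in range(-5000, 5001)]     # scan v over [-5, 5]
scanned_min = min(hinge_form(v, terms) for v in grid)

print(round(direct_max, 6), round(scanned_min, 6))  # the two values agree
```

The minimum is attained at any v between the largest and second-largest pieces, which is why the minimizer can always be chosen as a measurable function of z, as the lemma requires.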
Theorem 4 The following inequality holds: Z_0^* \ge Z_{PWLDR} \ge Z_{LDR}.

Proof. Given a feasible piecewise-linear decision rule (x^*, y^*) obtained from model (8), Theorem 3 shows that it is also feasible in (4). Under this decision rule, the objective satisfies

\[
\begin{aligned}
1 - Z_0^*
\le{}& \sup_{P \in F} E_P \Bigg( 1 - \alpha B + \sum_{k=1}^{N} \max_{m \in [L]} \{ c_{km} y_k(\tilde{z}) + \alpha d_{km} \} + \max_{m \in [K]} \{ p_m x_M(\tilde{z}) - \alpha p_m \tau + \alpha q_m \} \\
& \qquad + \sum_{\substack{(i,j) \in A(M) \\ k = \phi(i,j)}} \Big( c_{kL} \big( y_k(\tilde{z}) \big)^- + \big( p_K - c_{k1} \big)^+ \big( y_k(\tilde{z}) - \alpha u_k \big)^+ \Big) \\
& \qquad + p_K \sum_{\substack{(i,j) \in A(M) \\ k = \phi(i,j)}} \big( \alpha \tilde{z}_k - y_k(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \\
& \qquad + p_K \sum_{(i,j) \in A(M)} \big( -x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ + p_K \sum_{i \in N(M)} \big( x_i(\tilde{z}) \big)^- \Bigg)^+ \\
\le{}& \sup_{P \in F} E_P \Bigg( 1 - \alpha B + \sum_{k=1}^{N} s_k(\tilde{z}) + t(\tilde{z}) + \sum_{k=1}^{N} \sum_{m=1}^{L} \big( c_{km} y_k(\tilde{z}) + \alpha d_{km} - s_k(\tilde{z}) \big)^+ \\
& \qquad + \sum_{m=1}^{K} \big( p_m x_M(\tilde{z}) - \alpha p_m \tau + \alpha q_m - t(\tilde{z}) \big)^+ \\
& \qquad + \sum_{k=1}^{N} \Big( c_{kL} \big( y_k(\tilde{z}) \big)^- + \big( p_K - c_{k1} \big)^+ \big( y_k(\tilde{z}) - \alpha u_k \big)^+ \Big) \\
& \qquad + p_K \sum_{(i,j) \in E} \big( \alpha \tilde{z}_{\phi(i,j)} - y_{\phi(i,j)}(\tilde{z}) - x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ \\
& \qquad + p_K \sum_{(i,j) \in E} \big( -x_j(\tilde{z}) + x_i(\tilde{z}) \big)^+ + p_K \sum_{i=1}^{M} \big( x_i(\tilde{z}) \big)^- \Bigg)^+ \\
\le{}& 1 - Z_{PWLDR}.
\end{aligned}
\]

The first inequality results from applying the definitions of (x^*, y^*) and the assumptions p_K \ge p_m, \forall m \in [K], and c_{kL} \ge c_{km} \ge c_{k1}, \forall m \in [L], k \in [N]. We also use the inequality ( p_K - c_{k1} )^+ \ge p_K - c_{k1}. The second inequality is a consequence of applying Lemma 1, and introducing auxiliary decision rules s \in Y(N, N, [N]) and t \in Y(1, N, [N]). Also, we use the fact that since M is the terminal node, N(M) = [M] and A(M) = E, which simplifies the summations. Finally, the last inequality results from adding and subtracting another auxiliary decision rule r \in Y(1, N, [N]) and applying subadditivity of the (\cdot)^+ and \sup_{P \in F} E_P(\cdot) operators.

We now show that Z_{PWLDR} \ge Z_{LDR}. For any feasible solution to (6), we form the affine functions

\[
\begin{aligned}
x_i(\tilde{z}) &= x_i^0 + x_i' \tilde{z}, \ \forall i \in [M], \\
y_k(\tilde{z}) &= y_k^0 + y_k' \tilde{z}, \ \forall k \in [N], \\
s_k(\tilde{z}) &= s_k^0 + s_k' \tilde{z}, \ \forall k \in [N], \\
r(\tilde{z}) &= r^0 + r' \tilde{z}, \\
t(\tilde{z}) &= t^0 + t' \tilde{z},
\end{aligned}
\]

which are feasible for (8). Moreover, since the inequalities of (6) are satisfied by a feasible solution, the nonlinear terms in the objective of (8) vanish, and the expression for the objective coincides with (6). Hence, we have Z_0^* \ge Z_{PWLDR} \ge Z_{LDR}, as required.

In problem (8), the only explicit constraints remaining are the non-anticipativity constraints for the decision rules. The other constraints are implicitly satisfied by our construction of x^*(\tilde{z}) and y^*(\tilde{z}) in Theorem 3. This property is atypical in distributionally robust optimization models, and follows from the particular structure of the project crashing problem. In particular, the proof of Theorem 3 relies substantially on the properties that N(i) \subseteq N(j) and A(i) \subseteq A(j), for any arc (i,j) \in E on the project network. Together with the linearity of the constraints, this allows problem (8) to be expressed as shown.

Theorems 3 and 4, respectively, show that the piecewise-linear rules are feasible decision rules and lead to a tighter approximation of the optimal objective Z_0^* than LDRs. However, solving problem (8) is not as straightforward as solving (6), due to the piecewise-linear terms within the expectations. Nonetheless, we can solve this problem approximately, by using the following upper bound.

Proposition 1 (Goh and Sim 2010) If F = \{ P : P(\tilde{z} \in W) = 1,\ \hat{z} = E_P(\tilde{z}) \in \hat{W},\ E_P(\tilde{z}\tilde{z}') = \Sigma + \hat{z}\hat{z}' \}, then

\[
\sup_{P \in F} E_P \big( y^0 + y' \tilde{z} \big)^+
\le \pi(y^0, y) \triangleq \min_{x^0, x} \left\{ \pi_1(x^0, x) + \pi_2( y^0 - x^0,\ y - x ) \right\},
\]

where

\[
\pi_1(y^0, y) \triangleq \inf_{s \in \Re^N} \left\{ \sup_{\hat{z} \in \hat{W}} s'\hat{z} + \sup_{z \in W} \max \big\{ y^0 + y'z - s'z,\ -s'z \big\} \right\},
\]

\[
\pi_2(y^0, y) \triangleq \sup_{\hat{z} \in \hat{W}} \left\{ \frac{1}{2} \big( y^0 + y'\hat{z} \big) + \frac{1}{2} \sqrt{ \big( y^0 + y'\hat{z} \big)^2 + y' \Sigma y } \right\}.
\]
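For intuition, the \pi_2 term can be sanity-checked numerically in the scalar case, where it reduces to the classical mean-variance bound (\mu + \sqrt{\mu^2 + \sigma^2})/2 on E(\cdot)^+. The check below uses an assumed normal distribution purely as one member of the family F; the parameter values are illustrative.

```python
# Spot-check of the second bound in Proposition 1: for a scalar rule with
# known mean mu and variance sigma^2, pi_2 reduces to
# (mu + sqrt(mu^2 + sigma^2)) / 2, which upper-bounds E[(.)^+] for *any*
# distribution with those two moments. We compare against a Monte Carlo
# estimate under one such distribution (a normal, chosen for illustration).
import math, random

def pi2_scalar(mu, sigma):
    return 0.5 * (mu + math.sqrt(mu * mu + sigma * sigma))

mu, sigma = 0.3, 1.2
random.seed(0)
n = 200_000
mc = sum(max(random.gauss(mu, sigma), 0.0) for _ in range(n)) / n

print(round(pi2_scalar(mu, sigma), 4))   # distribution-free upper bound
print(mc < pi2_scalar(mu, sigma))        # bound dominates the MC estimate
```

The gap between the bound and the Monte Carlo estimate is the price of distributional robustness: the bound must also cover the worst-case two-point distribution with the same mean and variance.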

Using these bounds, we obtain the following approximate solution to problem (8):

\[
\begin{aligned}
1 - Z^{(1)}_{PWLDR} = \min \ & \pi(r^0, r) + \pi\Big( 1 - \alpha B + \sum_{k=1}^{N} s_k^0 + t^0 - r^0,\ \sum_{k=1}^{N} s_k + t - r \Big) \\
& + \sum_{k=1}^{N} \sum_{m=1}^{L} \pi\big( c_{km} y_k^0 + \alpha d_{km} - s_k^0,\ c_{km} y_k - s_k \big) \\
& + \sum_{m=1}^{K} \pi\big( p_m x_M^0 + \alpha q_m - \alpha p_m \tau - t^0,\ p_m x_M - t \big) \\
& + p_K \sum_{(i,j) \in E} \pi\big( -x_j^0 + x_i^0 - y_{\phi(i,j)}^0,\ \alpha e_{\phi(i,j)} - x_j + x_i - y_{\phi(i,j)} \big) \\
& + p_K \sum_{(i,j) \in E} \pi\big( -x_j^0 + x_i^0,\ -x_j + x_i \big) + p_K \sum_{i=1}^{M} \pi\big( -x_i^0,\ -x_i \big) \\
& + \sum_{k=1}^{N} \Big( c_{kL}\, \pi\big( -y_k^0,\ -y_k \big) + \big( p_K - c_{k1} \big)^+ \pi\big( y_k^0 - \alpha u_k,\ y_k \big) \Big) \\
\text{s.t.} \ & x_{ij} = 0, \quad \forall i \in [M],\ \forall j \notin I_i^x \\
& y_{kj} = 0, \quad \forall k \in [N],\ \forall j \notin I_k^y,
\end{aligned}
\tag{12}
\]

with decision variables \alpha, t^0, t, r^0, r, \{x_i^0, x_i\}_{i=1}^{M}, and \{y_k^0, y_k, s_k^0, s_k\}_{k=1}^{N}.

From the structure of \pi(y^0, y), for some decision variable w, the constraint \pi(y^0, y) \le w can be expressed in terms of second-order conic constraints. Such problems are both theoretically and computationally tractable (Nesterov and Nemirovski 1994).
In order to establish the relative strength of the various bounds on Z_0^* described above, we need the following two preliminary results.

Lemma 2 Consider a scalar-valued LDR with an arbitrary information index set, y \in L(1, N, I).

1. If y(\tilde{z}) = y^0 + y'\tilde{z} \le 0, then \pi_1(y^0, y) \le 0.

2. If y(\tilde{z}) = y^0 + y'\tilde{z} \ge 0, then \pi_1(y^0, y) \le \sup_{\hat{z} \in \hat{W}} \big( y^0 + y'\hat{z} \big).

Proof. Applying the definition of \pi_1(y^0, y), the assumption y(\tilde{z}) \le 0 implies that

\[
\pi_1(y^0, y) = \inf_{s \in \Re^N} \left\{ \sup_{\hat{z} \in \hat{W}} s'\hat{z} + \sup_{z \in W} ( -s'z ) \right\}.
\]

Since s = 0 is feasible, we have \pi_1(y^0, y) \le 0. Similarly, the assumption y(\tilde{z}) \ge 0 implies that

\[
\pi_1(y^0, y) = \inf_{s \in \Re^N} \left\{ \sup_{\hat{z} \in \hat{W}} s'\hat{z} + \sup_{z \in W} \big( y^0 + y'z - s'z \big) \right\}.
\]

Choosing s = y yields \pi_1(y^0, y) \le \sup_{\hat{z} \in \hat{W}} \big( y^0 + y'\hat{z} \big).

Lemma 3 Consider a scalar-valued LDR with an arbitrary information index set, y \in L(1, N, I). If y(\tilde{z}) = y^0 + y'\tilde{z} \le 0, then \pi(y^0, y) = 0.

Proof. y(\tilde{z}) \le 0 implies \sup_{P \in F} E_P \big( y^0 + y'\tilde{z} \big)^+ = 0. Using Proposition 1, we have \pi(y^0, y) \ge 0. However,

\[
\begin{aligned}
\pi(y^0, y) &= \min_{x^0, x} \left\{ \pi_1(x^0, x) + \pi_2( y^0 - x^0, y - x ) \right\} \\
&\le \pi_1(y^0, y) + \pi_2(0, 0) && \text{(choosing } (x^0, x) = (y^0, y) \text{)} \\
&= \pi_1(y^0, y) && \text{(by positive homogeneity of } \pi_2(\cdot) \text{)} \\
&\le 0 && \text{(by Lemma 2, part 1).}
\end{aligned}
\]

Hence, we have \pi(y^0, y) = 0.

We are now ready to prove our main result about the relative strength of the various bounds.

Theorem 5 The following inequality holds: Z_0^* \ge Z_{PWLDR} \ge Z^{(1)}_{PWLDR} \ge Z_{LDR}.

Proof. Consider a feasible solution to (6). Observe that this solution is also feasible in (12). The objective of (12) is then simplified by iteratively applying Lemma 3 to each constraint of model (6), which causes all the \pi(\cdot) terms except the first to vanish. Thus, we have

\[
\begin{aligned}
1 - Z^{(1)}_{PWLDR} &\le \pi(r^0, r) \\
&\le \pi_1(r^0, r) && \text{(from the proof of Lemma 3)} \\
&\le \sup_{\hat{z} \in \hat{W}} \big( r^0 + r'\hat{z} \big) && \text{(by Lemma 2, part 2)} \\
&= 1 - Z_{LDR}.
\end{aligned}
\]

Therefore, we have Z^{(1)}_{PWLDR} \ge Z_{LDR}.

The other inequality, Z_{PWLDR} \ge Z^{(1)}_{PWLDR}, follows, first from applying subadditivity of the \sup_{P \in F} E_P(\cdot) operator to obtain

\[
\sup_{P \in F} E_P \Big( c_L' \big( y(\tilde{z}) \big)^- + d' \big( y(\tilde{z}) - \alpha u \big)^+ \Big)
\le \sup_{P \in F} E_P\, c_L' \big( y(\tilde{z}) \big)^- + \sup_{P \in F} E_P\, d' \big( y(\tilde{z}) - \alpha u \big)^+
\]

in the objective of (8), and second from iteratively applying Proposition 1 to bound the respective terms in the objective of (8) from above.

This implies that we can obtain decision rules which perform better than the simpler LDRs, at the cost of additional computational complexity in solving (12). This improvement arises from two sources. First, the piecewise-linear rules explore a larger space of possible decisions. Second, by using the piecewise-linear rules and the bounds developed, we can incorporate correlation information between different activities, through the covariance matrix \Sigma which appears in \pi_2(\cdot).

4.5.3 Crashing decisions for the example project

For the example project described in Section 4.4, after solving model (6) and applying Theorem 1, we obtain the LDR for crashing amounts as

\[
y^L(\tilde{z}) = [0,\ \tilde{z}_1,\ 0,\ 1]'.
\]

Similarly, solving model (12) and applying Theorem 3, the PWLDR solution can be simplified to

\[
y^P(\tilde{z}) = [0,\ 0.215 + 0.785\tilde{z}_1,\ 0,\ 1]'.
\]

We compare the performance of the LDR solution with the other heuristics shown in Table 1. We simulate the uncertain activity times as beta random variables (Kerzner 2009) with the given statistics. In particular, we use the mixture approach of Michael and Schucany (2002) to simulate the beta-distributed correlated activity times. In our simulation, we first generate the activity time, and then subtract the crash amount from the activity time. If the reduced activity time is negative, then we assign it to zero; however, we still include the full crash amount in our cost computation. This convention models situations where outsourced work components are billable even if they are not fully utilized (Greaver 1999). We perform 10,000 independent simulation runs using each strategy to compute the project success probability, that is, the probability that the sum of the crashing and penalty costs is within the overall project budget. These probabilities are summarized in Table 2.

Strategy    Probability
NOCRASH        38.76%
MINCRASH       57.38%
MCSIM          78.70%
LDR            86.10%
PWLDR          86.92%

Table 2: Probability of Meeting the Overall Budget in the Example.

The results illustrate that, for this simple example, the solutions obtained from our LDR and PWLDR strategies yield higher success probabilities than all the previous methods. More extensive computational tests in Section 5 further demonstrate that the benefits in project performance which result from the use of our decision rules are typically substantial.

4.5.4 An alternative bound

Model (12) is a natural approximation of (8), but not a unique one. As an alternative, we can apply a different bound on some terms to construct a new approximation. Specifically, one of the worst-case expectation terms in the objective of (8) has an argument that can be decomposed into the sum of piecewise-linear convex terms, as follows:

\[
\begin{aligned}
\sup_{P \in F} E_P \Big( c_L' \big( y(\tilde{z}) \big)^- + d' \big( y(\tilde{z}) - \alpha u \big)^+ \Big)
&= \sup_{P \in F} E_P \sum_{k=1}^{N} \max \big\{ -c_{kL} y_k(\tilde{z}),\ 0,\ d_k y_k(\tilde{z}) - \alpha u_k d_k \big\} \\
&\le \sum_{k=1}^{N} \sup_{P \in F} E_P \Big( \max \big\{ -c_{kL} y_k(\tilde{z}),\ 0,\ d_k y_k(\tilde{z}) - \alpha u_k d_k \big\} \Big).
\end{aligned}
\]

We make use of the following result.

Proposition 2 (Natarajan et al. 2010) If F = \{ P : P(\tilde{z} \in W) = 1,\ \hat{z} = E_P(\tilde{z}) \in \hat{W},\ E_P(\tilde{z}\tilde{z}') = \Sigma + \hat{z}\hat{z}' \}, and given parameters a, b \in \Re^K for some positive integer K \ge 2, then

\[
\sup_{P \in F} E_P \max_{i \in [K]} \big\{ a_i \big( y^0 + y'\tilde{z} \big) + b_i \big\}
\le \eta(y^0, y; a, b) \triangleq \min_{w, x} \left\{ \eta_1(w, x; a) + \eta_2\big( a y^0 + b - w,\ y - x \big) \right\},
\]

where

\[
\eta_1(w, y; a) \triangleq \inf_{s \in \Re^N} \left\{ \sup_{\hat{z} \in \hat{W}} s'\hat{z} + \sup_{z \in W} \max_{i \in [K]} \big\{ w_i + ( a_i y - s )' z \big\} \right\},
\]

\[
\begin{aligned}
\eta_2(w, y) \triangleq \inf_{s, t, v} \ & \left\{ s + \max_{i \in [K]} \big\{ a_i^2 v + a_i t + w_i \big\} + \sup_{\hat{z} \in \hat{W}} y'\hat{z} \right\} \\
\text{s.t.} \ & y' \Sigma y + t^2 \le 4 s v \\
& v \ge 0.
\end{aligned}
\]

This bound was shown by Natarajan et al. (2010) to be exact if F is defined by mean and support information, or if F is defined by mean and covariance information. Using this bound, and defining the parameters, for all k \in [N],

\[
a_k \triangleq [0,\ -c_{kL},\ d_k]',
\qquad
b_k \triangleq [0,\ 0,\ -\alpha u_k d_k]',
\tag{13}
\]


we obtain the following alternative approximation of (8):

\[
\begin{aligned}
1 - Z^{(2)}_{PWLDR} = \min \ & \pi(r^0, r) + \pi\Big( 1 - \alpha B + \sum_{k=1}^{N} s_k^0 + t^0 - r^0,\ \sum_{k=1}^{N} s_k + t - r \Big) \\
& + \sum_{k=1}^{N} \sum_{m=1}^{L} \pi\big( c_{km} y_k^0 + \alpha d_{km} - s_k^0,\ c_{km} y_k - s_k \big) \\
& + \sum_{m=1}^{K} \pi\big( p_m x_M^0 + \alpha q_m - \alpha p_m \tau - t^0,\ p_m x_M - t \big) \\
& + p_K \sum_{(i,j) \in E} \pi\big( -x_j^0 + x_i^0 - y_{\phi(i,j)}^0,\ \alpha e_{\phi(i,j)} - x_j + x_i - y_{\phi(i,j)} \big) \\
& + p_K \sum_{(i,j) \in E} \pi\big( -x_j^0 + x_i^0,\ -x_j + x_i \big) + p_K \sum_{i=1}^{M} \pi\big( -x_i^0,\ -x_i \big) \\
& + \sum_{k=1}^{N} \eta\big( y_k^0,\ y_k;\ a_k,\ b_k \big) \\
\text{s.t.} \ & x_{ij} = 0, \quad \forall i \in [M],\ \forall j \notin I_i^x \\
& y_{kj} = 0, \quad \forall k \in [N],\ \forall j \notin I_k^y,
\end{aligned}
\tag{14}
\]

with decision variables \alpha, t^0, t, r^0, r, \{x_i^0, x_i\}_{i=1}^{M}, and \{y_k^0, y_k, s_k^0, s_k\}_{k=1}^{N}.
? ? Theorem 6 The following inequality holds: Z0 ? ZP W LDR ? ZP W LDR ? ZLDR . 0 0 Proof. First, for any k ? [N ], if ?yk , y k such that ?z ? W , 0 ? yk + y k z ? ?uk then, for (ak , bk ) (2)

as de?ned in (13), we have
0 0 ? (yk , y k ; ak , bk ) ? ? 1 (ak yk + bk , y k ; ak )

= = =

s?

inf

N

ˆ z ˆ?W

0 sup {s z ˆ} + sup max aki yk + y z + bki ? s z z ?W i?[K ] 0 0 sup {s z ˆ} + sup max 0, ?ckL yk + y z , dk yk + y z ? ?uk z ?W

s?

inf

N

?sz

ˆ z ˆ?W

s?

inf

N

sup {s z ˆ} + sup {s z }
ˆ z ˆ?W z ?W

? 0.
0 Since Proposition 2 implies that ? (yk , y k ; ak , bk ) ? 0, we conclude that under these conditions, 0 ? (yk , y k ; ak , bk ) = 0. By an identical argument to that in Theorem 5, any feasible solution to

(6) is also feasible in (14) with matching objective, and hence ZP W LDR ? ZLDR .
? Similarly, applying Propositions 1 and 2 to (8) yields ZP W LDR ? ZP W LDR . (2)

(2)


4.6 Rolling horizon crashing model

To make the most effective use of periodic information updates as well as activity completion time information, we develop a rolling horizon crashing model. As described in Section 2.3, an updated project report is received in each period. Then the project manager updates the model parameters based on the realized uncertainties, and solves problem (4). The solution follows an iterative procedure. The iteration number is denoted by t \ge 0. For notational simplicity, in the tth period, we define the vector of unfinished fractions of the activities as f^{(t)} \in [0,1]^N. For example, if activity k has been completed (respectively, has not started) prior to the tth report, then f_k^{(t)} = 0 (resp., f_k^{(t)} = 1). Also, we denote by g^{(t)} \in [0,1]^N the vector of fractional work completed for each activity in period t. Accordingly, we have

\[
g^{(t)} = f^{(t-1)} - f^{(t)}.
\tag{15}
\]

Furthermore, we define by D_f^{(t)} the N \times N diagonal matrix with f^{(t)} on the main diagonal, and D_g^{(t)} the N \times N diagonal matrix with g^{(t)} on the main diagonal. The solution procedure uses the following algorithm.

Algorithm CrashPlan

Step 1. Initialize t = 0, F^{(0)} = F, f^{(0)} = e, u^{(0)} = u, \tau^{(0)} = \tau, B^{(0)} = B.

Step 2. Solve the following problem:

\[
\begin{aligned}
Z_0^{(t)*} = \max_{x^{(t)}(\cdot),\, y^{(t)}(\cdot),\, \alpha} \ & \inf_{P \in F^{(t)}} E_P \left( \min \left\{ 1,\ \alpha B^{(t)} - \alpha \sum_{k=1}^{N} c_k y_k^{(t)}(\tilde{z}) - \alpha p \big( x_M^{(t)}(\tilde{z}) - \tau^{(t)} \big) \right\} \right)^+ \\
\text{s.t.} \ & x_j^{(t)}(\tilde{z}) - x_i^{(t)}(\tilde{z}) \ge \big( \tilde{z}_k - y_k^{(t)}(\tilde{z}) \big)^+, \quad \forall (i,j) \in E,\ k = \phi(i,j) \\
& 0 \le y^{(t)}(\tilde{z}) \le u^{(t)} \\
& x^{(t)}(\tilde{z}) \ge 0 \\
& x_i^{(t)} \in Y(1, N, I_i^x), \quad \forall i \in [M] \\
& y_k^{(t)} \in Y(1, N, I_k^y), \quad \forall k \in [N] \\
& \alpha \ge 0,
\end{aligned}
\tag{16}
\]

to obtain new decision rules x^{(t)}(\cdot), y^{(t)}(\cdot) for start times and crashing amounts, which are implemented in period t + 1.

Step 3. Wait to receive the tth periodic project report.

Step 4. Obtain f^{(t)} from the report. Compute g^{(t)} from (15). Also, compute the matrices D_f^{(t)} and D_g^{(t)}. If f^{(t)} = 0, then stop. Otherwise, continue with Step 5.

Step 5. Update the family of uncertainty distributions using

\[
F^{(t+1)} = \left\{ P : E_P(\tilde{z}) = D_f^{(t)} \mu,\ E_P(\tilde{z}\tilde{z}') = D_f^{(t)} \big( \Sigma + \mu\mu' \big) D_f^{(t)},\ P\big( \tilde{z} \in W^{(t+1)} \big) = 1 \right\},
\]

where

\[
W^{(t+1)} = \left\{ \zeta \in \Re^N : \zeta_i = f_i^{(t)} z_i\, 1_{\{ f_i^{(t)} > 0 \}},\ \forall i \in [N],\ z \in W \right\}.
\tag{17}
\]

Step 6. Update the vector of remaining crash limits using

\[
u^{(t+1)} = u^{(t)} - D_g^{(t)} y^{(t)}(\tilde{z}).
\tag{18}
\]

Step 7. Update the remaining time and budget, respectively, using

\[
\tau^{(t+1)} = \tau - tT
\quad \text{and} \quad
B^{(t+1)} = B - \sum_{k=1}^{N} c_k y_k^{(t)}(\tilde{z}) \cdot \big( 1 - f_k^{(t)} \big).
\tag{19}
\]
Step 8. If f^{(t)} = 0, then stop. Otherwise, set t = t + 1, and go to Step 2.

In Step 5, the family of uncertainty distributions F^{(t+1)} is updated using a heuristic that proportionately scales the distributional properties. In particular, when W has the simple but common structure of a box, W = \{ z \in \Re^N : z_l \le z \le z_u \} for some vectors of lower and upper limits z_l and z_u, the support update simplifies to

\[
W^{(t+1)} = \left\{ \zeta \in \Re^N : D_f^{(t)} z_l \le \zeta \le D_f^{(t)} z_u \right\}.
\]
Notice that this is effectively an assumption that the uncertainty associated with an activity is uniform across the activity, which parallels a standard assumption in earned value analysis where resources for each activity are used at a uniform rate (Klastorin 2004).

Regarding consumption of the budget by crashing cost, we assume that crashing activity and the accrual of crashing cost occur at a constant rate within each period. This assumption is consistent with the situation where crashing involves assigning additional resources to an activity, with resources having constant usage and cost rates throughout the period (Harrison and Lock 2004). Hence, the amount of crashing used in the tth period can be written as D_g^{(t)} y^{(t)}(\tilde{z}), which explains the formula used for updating u^{(t)} in Step 6.

We note two points regarding our rolling horizon model. First, the crashing decisions are, by construction, only functionally dependent on fully completed activities. Consequently, in each update period, the indices k where g_k^{(t)} > 0 are exactly those for which y_k^{(t)}(\tilde{z}) is measurable with respect to the filtration generated by the activity times that are known by period t. As a result, in (18) and (19), the crash limits and budget are always updated with deterministic quantities. Second, in the later update periods, if an activity k is fully complete, (17) gives us \tilde{z}_k = 0 almost surely. In this case, the crashing decision in period t for activity k is redundant, since the \big( \tilde{z}_k - y_k^{(t)}(\tilde{z}) \big)^+ term in (16) evaluates to identically zero. The redundant decision rules are retained in the model for ease of exposition.

For any given sample path comprising the realization of uncertain activity times, the rolling horizon approach leads to better project performance, with respect to the probability of meeting the total project budget, than model (4). Intuitively, the rolling horizon approach uses periodic information updates to improve the crashing decisions iteratively. In Section 5, we use a computational study to estimate the extent of this improvement.
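The bookkeeping in Steps 4-7 can be sketched as follows. This is a simplified illustration with made-up project numbers; the decision rule y^{(t)}(·) is replaced by a fixed crash vector, which is consistent with the observation above that the updates are deterministic once the relevant activities are complete.

```python
# Sketch of the bookkeeping in Steps 4-7 of Algorithm CrashPlan, using the
# update equations (15) and (17)-(19). All project numbers are illustrative
# assumptions; the decision rule is replaced by a fixed crash vector y.

def rolling_update(f_prev, f, u, B, mu, z_lo, z_hi, c, y):
    g = [fp - fc for fp, fc in zip(f_prev, f)]            # (15)
    mu_new = [fc * m for fc, m in zip(f, mu)]             # scaled mean, (17)
    lo_new = [fc * z for fc, z in zip(f, z_lo)]           # box support update
    hi_new = [fc * z for fc, z in zip(f, z_hi)]
    u_new = [uk - gk * yk for uk, gk, yk in zip(u, g, y)]        # (18)
    B_new = B - sum(ck * yk * (1 - fc)                           # (19)
                    for ck, yk, fc in zip(c, y, f))
    return g, mu_new, lo_new, hi_new, u_new, B_new

f_prev = [1.0, 1.0, 1.0]        # nothing finished at the last report
f      = [0.0, 0.5, 1.0]        # activity 1 done, 2 half done, 3 not started
u      = [2.0, 2.0, 2.0]        # remaining crash limits
B, c, y = 20.0, [1.0, 2.0, 1.5], [1.0, 1.0, 0.0]

g, mu_new, lo_new, hi_new, u_new, B_new = rolling_update(
    f_prev, f, u, B, [4.0, 3.0, 5.0], [2.0, 1.0, 3.0], [6.0, 5.0, 7.0], c, y)
print(g)       # [1.0, 0.5, 0.0]
print(u_new)   # [1.0, 1.5, 2.0]
print(B_new)   # 20 - (1*1*1 + 2*1*0.5 + 0) = 18.0
```

Note how a completed activity (here, activity 1) has its scaled mean and support collapse to zero, mirroring the remark that its crashing decision becomes redundant in later periods.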

5 Computational Study

In this section, we summarize the results of our computational study on the crashing strategies described above. A discussion of our data generation procedure and experimental design is followed by a summary of our computational results.

Our data set consists of 1,000 sets of activity times for each of 500 networks, which are randomly generated as follows. Using the guidelines of Hall and Posner (2001), (a) we generate a wide range of parameter specifications, (b) the data generated is representative of real-world scenarios, and (c) the experimental design varies only the parameters that may affect the analysis. Also, we use the procedure described by Hall and Posner (2001) within each project network to generate precedence constraints that have an expected density, or order strength (OS), which meets a prespecified target and is also uniform across the network. We use the random network generator RanGen1 of Demeulemeester et al. (2003) to generate networks with a wide range of network topologies. The authors show that the networks generated span the full range of problem complexity with a prespecified OS. In our experiment, we generate 500 project instances that contain 10 activities each, with an OS of 0.5.

We then apply a series of transformations to each network to fit it into our modeling framework. First, we convert the generated networks from the AON convention to the AOA convention, using the technique of Cohen and Sadeh (2006). Second, to introduce stochasticity into the project networks, we use the activity times generated by RanGen1 as mean activity times, and assume that each activity has support on the interval [0.2t, 1.8t], where t represents the mean activity time of a given activity. We conduct two sets of tests, first where the marginal distribution of each activity time is uniform, and second where the marginal distribution of each activity time is β(5, 5) distributed (Kerzner 2009). The diagonal entries of the covariance matrix are therefore implicitly defined by the choice of distribution. For each marginal distribution, we test three subcases: (a) where the activity times are independent (INDEP), (b) where the activity times are randomly correlated with a diagonal covariance matrix on average (SYMCORR), and (c) where the activity times are randomly correlated with a nondiagonal covariance matrix on average (ASYMCORR).

Next, we choose problem parameters to ensure that our project instances are relevant to the project crashing problem. The crash cost rate for each activity is generated from a uniform distribution between 0 and 2, and the penalty cost rate of each project is set to the number of activities in the network. The justification for this is that we expect that, as the project network grows in size, the penalty cost rate should increase approximately linearly, to reflect the impact of a delay. To choose the project completion time \tau, we make the PERT Assumptions A1–A3, and choose \tau so that the project is delayed with probability 95%.
We then choose B such that, according to the PERT assumptions, we have a 70% probability of project failure if no crashing is used. For simplicity, we assume that each activity can be completely crashed if desired. For each project instance, we randomly generate 1,000 sets of activity times drawn from our assumed distribution. In the experiment with correlated activities, we use the standard Gaussian copula technique (Nelsen 1999) to generate the correlated activities.

We compare various crashing strategies. First, we consider the NOCRASH strategy, which performs no crashing and saves all of the project budget to pay the penalty costs in the event of late project completion. Second, we consider the MINCRASH strategy that is described in Section 4.4. Third, we consider a benchmark strategy MCSIM, which computes the crashing decisions using model (5), based on the Utopian assumption of complete knowledge of the distribution of activity times. Next, we use our LDR and PWLDR decision rules to compute the crashing decisions. We record the results for the PWLDR decision rules under the bounds Z^{(1)}_{PWLDR} and Z^{(2)}_{PWLDR} separately, as PWLDR(1) and PWLDR(2), respectively. Finally, we compare these with the rolling horizon method, using the LDR decision rule to solve model (16) with T = 0.7\tau. This choice of a relatively large period T was made to keep the number of information update periods per network after crashing between two and three, for computational reasons. Unlike in the other methods, here we solve for individual crashing decisions on individual networks; hence, having a large number of periods per network is computationally expensive. Models for each strategy are implemented using ROME (Goh and Sim 2009), which is a MATLAB-based modeling language for robust optimization problems, and solved using CPLEX on a 64-bit workstation running a Linux operating system. The average time taken for computing the crashing decisions in a network instance is approximately 0.5 seconds for MINCRASH, 3.5 seconds for MCSIM, and 5.0 seconds for both the LDR and PWLDR strategies.

Table 3 summarizes the average probability of meeting the overall project budget over the 500 × 1,000 = 500,000 randomly generated instances within each test, as well as the 95% confidence intervals based on a normal approximation, for the various crashing strategies. In the table, p represents the average project success probability, and the ± column gives the half-width of its 95% confidence interval. Also, for a given project network, we compute the expected budget overrun as a fraction of the project budget, using L = \frac{1}{B} E \big( \tilde{C} - B \big)^+, where \tilde{C} represents the total crashing and penalty cost associated with each strategy. We record L, with the half-width of its 95% confidence interval, for each strategy in Table 4.
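As an aside, the β(5, 5) activity-time generation described earlier can be sketched as follows for the independent case. A Beta(5, 5) draw rescaled to [0.2t, 1.8t] keeps the mean at t, since Beta(5, 5) has mean 1/2; the mean durations below are illustrative assumptions.

```python
# Sketch of the beta(5,5) activity-time generation: each duration is a
# Beta(5,5) draw rescaled to the support [0.2*t, 1.8*t], which leaves the
# mean at t. Independent case only; mean values are illustrative.
import random

def sample_times(means, rng):
    return [0.2 * t + 1.6 * t * rng.betavariate(5, 5) for t in means]

rng = random.Random(1)
means = [4.0, 2.5, 6.0]
draws = [sample_times(means, rng) for _ in range(50_000)]

avg = [sum(d[k] for d in draws) / len(draws) for k in range(3)]
print([round(a, 1) for a in avg])        # close to the target means
print(all(0.2 * means[k] <= d[k] <= 1.8 * means[k]
          for d in draws for k in range(3)))   # support respected
```

The correlated cases would additionally pass correlated uniforms through the beta inverse CDF (the Gaussian copula step), which requires a numerical library and is omitted here.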
The results for uniform marginal activity time distributions show that crashing by any strategy increases the average probability of project success and reduces the expected budget overrun in each of the six subcases. Even the benchmark PERT-based MINCRASH strategy substantially improves the success probability over NOCRASH. The MCSIM results are provided as a Utopian benchmark, since they assume knowledge of the activity time distribution. In practice, however, this assumption is unrealistic (Williams 2003). By contrast, our LDR and PWLDR strategies use only general, descriptive statistics of the activity times. Both these strategies greatly outperform the NOCRASH and MINCRASH strategies.
Distribution  Strategy    INDEP           SYMCORR         ASYMCORR
                          p (%)   ± (%)   p (%)   ± (%)   p (%)   ± (%)
Uniform       NOCRASH     13.27   2.97    12.70   2.92    39.00   4.28
              MINCRASH    50.55   4.38    50.57   4.38    63.03   4.23
              MCSIM       94.45   2.01    93.97   2.09    90.45   2.58
              LDR         92.26   2.34    90.75   2.54    91.94   2.39
              PWLDR(1)    92.84   2.26    93.94   2.09    97.48   1.37
              PWLDR(2)    96.22   1.67    97.62   1.34    97.77   1.30
              RH(LDR)     86.97   2.95    86.07   3.03    81.83   3.38
β(5,5)        NOCRASH     18.50   3.40    17.92   3.36    36.42   4.22
              MINCRASH    67.63   4.10    68.40   4.08    67.68   4.10
              MCSIM       96.53   1.61    96.60   1.59    93.32   2.19
              LDR         76.56   3.71    74.31   3.83    73.55   3.87
              PWLDR(1)    96.62   1.58    98.89   0.92    98.64   1.01
              PWLDR(2)    99.07   0.84    99.43   0.66    98.45   1.08
              RH(LDR)     86.41   3.00    85.01   3.13    82.78   3.31

Table 3: Probability of Meeting Overall Project Budget in 10-Activity Networks.

Distribution  Strategy    INDEP           SYMCORR         ASYMCORR
                          L (%)   ± (%)   L (%)   ± (%)   L (%)   ± (%)
Uniform       NOCRASH     103.87  6.72    104.03  6.70    110.93  10.66
              MINCRASH    26.48   3.34    27.32   3.46    39.99   5.74
              MCSIM       1.20    0.59    1.31    0.61    3.57    1.27
              LDR         1.06    0.42    1.24    0.47    1.55    0.57
              PWLDR(1)    0.98    0.42    0.76    0.38    0.38    0.28
              PWLDR(2)    0.46    0.28    0.29    0.23    0.30    0.25
              RH(LDR)     4.38    1.31    4.71    1.37    8.21    1.96
β(5,5)        NOCRASH     93.06   6.88    92.56   6.89    107.63  10.79
              MINCRASH    17.91   3.16    18.09   3.30    35.84   6.18
              MCSIM       1.11    0.69    1.14    0.73    4.03    1.75
              LDR         6.69    1.42    7.24    1.49    7.19    1.44
              PWLDR(1)    1.94    1.27    0.30    0.37    0.25    0.30
              PWLDR(2)    0.18    0.25    0.13    0.24    0.38    0.40
              RH(LDR)     3.21    1.00    3.53    1.03    4.06    1.10

Table 4: Normalized Expected Budget Overrun in 10-Activity Networks.


Moreover, the PWLDR strategy performs competitively with the Utopian benchmark MCSIM in general, and outperforms it when the activity time covariance matrix is nondiagonal. The PWLDR(2) strategy has the best performance among the linear-based decision rules, exceeding even the Utopian MCSIM benchmark in all three subcases.

With marginally β(5,5)-distributed activity times, the LDR strategy shows the sharpest drop in performance. Since the LDR strategy ignores variance information, it produces over-conservative results when applied to activity times with a less flat distribution. Chen et al. (2008) report similar results. Nonetheless, the LDR strategy still greatly outperforms the NOCRASH and MINCRASH strategies. Both PWLDR strategies outperform all others when activity times are correlated, and the PWLDR(2) strategy provides over 98% probability of meeting the overall project budget and less than 0.5% expected budget overrun. Moreover, the PWLDR strategies typically improve in performance in the presence of either form of correlation, since they explicitly incorporate covariance information.

From Tables 3 and 4, we observe that the rolling horizon strategy improves on the LDR strategy for β(5,5)-distributed activity times, while it performs less well for uniformly distributed activity times. Since the rolling horizon strategy uses more information about the uncertainties than the LDR strategy, we would naturally expect it to improve on the LDR strategy in all instances. However, since we use approximations in both cases to solve models (4) and (16), the RH(LDR) strategy does not dominate the LDR strategy. This is especially true where the static LDR strategy already performs relatively well, as with the marginally uniform activity times.

We repeat our computational tests for a set of larger networks with 20 activities each.
Due to the longer computation times required to solve each network, we use a smaller test set of 50 randomly generated networks, and randomly generate 1,000 sets of activity times for each network; the exception is the RH(LDR) strategy, for which, due to longer computation times, we use 50 sets of activity times. We summarize our results in Tables 5 and 6. Tables 5 and 6 show that our results for 20-activity networks are similar to those for 10-activity networks, but even stronger. The LDR and PWLDR strategies typically outperform even the Utopian MCSIM strategy, and greatly outperform the PERT-based strategies MINCRASH and NOCRASH. Moreover, a comparison of results for the 10-activity and 20-activity networks shows that project success probabilities and expected budget overruns for our LDR and PWLDR strategies improve substantially in the larger networks.

Distribution  Strategy    INDEP           SYMCORR         ASYMCORR
                          p (%)   ± (%)   p (%)   ± (%)   p (%)   ± (%)
Uniform       NOCRASH     8.40    7.69    8.19    7.60    40.23   13.59
              MINCRASH    44.48   13.77   43.79   13.75   61.33   13.50
              MCSIM       99.36   2.21    99.38   2.18    96.91   4.80
              LDR         100.00  0.00    100.00  0.00    100.00  0.00
              PWLDR(1)    100.00  0.00    100.00  0.00    100.00  0.00
              PWLDR(2)    100.00  0.00    100.00  0.00    100.00  0.00
              RH(LDR)     98.72   3.12    96.68   4.97    97.60   4.24
β(5,5)        NOCRASH     15.59   10.05   16.22   10.22   38.62   13.50
              MINCRASH    63.22   13.37   64.44   13.27   66.01   13.13
              MCSIM       99.08   2.64    99.08   2.65    96.70   4.95
              LDR         97.20   4.58    95.65   5.65    96.11   5.36
              PWLDR(1)    99.53   1.90    99.97   0.46    99.81   1.19
              PWLDR(2)    99.89   0.90    99.97   0.46    99.68   1.56
              RH(LDR)     97.76   4.10    99.88   0.96    99.56   1.83

Table 5: Probability of Meeting Overall Project Budget in 20-Activity Networks.

Distribution  Strategy    INDEP           SYMCORR         ASYMCORR
                          L (%)   ± (%)   L (%)   ± (%)   L (%)   ± (%)
Uniform       NOCRASH     116.16  20.98   116.32  20.86   125.08  38.71
              MINCRASH    29.95   10.99   32.33   11.72   50.34   22.25
              MCSIM       0.07    0.32    0.07    0.36    0.58    1.19
              LDR         0.00    0.00    0.00    0.00    0.00    0.00
              PWLDR(1)    0.00    0.00    0.00    0.00    0.00    0.00
              PWLDR(2)    0.00    0.00    0.00    0.00    0.00    0.00
              RH(LDR)     0.20    0.74    1.06    2.02    0.68    1.51
β(5,5)        NOCRASH     98.86   21.77   97.83   22.24   120.23  39.50
              MINCRASH    20.61   10.53   20.11   10.75   45.89   24.20
              MCSIM       0.22    0.86    0.26    1.02    1.85    3.65
              LDR         0.14    0.29    0.38    0.55    0.57    0.88
              PWLDR(1)    0.01    0.07    0.00    0.00    0.02    0.12
              PWLDR(2)    0.00    0.02    0.00    0.00    0.03    0.16
              RH(LDR)     0.65    1.26    0.01    0.15    0.00    0.02


Table 6: Normalized Expected Budget Overrun in 20-Activity Networks.


The most significant improvement occurs for the LDR strategy where the activity times are β(5,5)-distributed. Since the LDR and PWLDR strategies explicitly take advantage of complex precedence structures, we expect their performance to improve with network size.

In summary, our results show that both the LDR and PWLDR crashing strategies improve substantially over benchmark PERT-based strategies. Our strongest recommendation is for the PWLDR strategy with the PWLDR(2) bound, which provides very close to 100% probability of overall project success and very close to zero expected budget overrun across a wide variety of data sets. Where activity time variances are unavailable or their estimates are unreliable, so that the PWLDR crashing strategy cannot be used, the RH(LDR) strategy is a valuable alternative.
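The rolling-horizon implementation described above, which re-solves for new decision rules at each information-update period, can be sketched as a simple control loop. The following is a schematic only, under labeled assumptions: solve_crashing_model is a hypothetical placeholder standing in for re-solving the robust model on the remaining activities, activity completion is faked rather than simulated, and the update fraction 0.7 mirrors the T = 0.7τ choice used in our experiments.

```python
def solve_crashing_model(remaining):
    """Placeholder for re-solving the robust crashing model on the remaining
    activities. A real implementation would rebuild and solve model (16)
    (e.g. via a robust-optimization modeling tool such as ROME with CPLEX);
    here it returns a trivial plan that crashes nothing, to show control flow.
    """
    return {a: 0.0 for a in remaining}

def rolling_horizon(activities, tau, update_fraction=0.7):
    """Re-optimize at each information-update period of length update_fraction * tau."""
    T = update_fraction * tau
    t, remaining, plans = 0.0, list(activities), []
    while remaining and t < tau:
        # Re-solve using the information available at time t, for the
        # activities that have not yet completed.
        plans.append((t, solve_crashing_model(remaining)))
        t += T
        remaining = remaining[1:]  # stand-in: assume the earliest activity finishes
    return plans

plans = rolling_horizon(["a1", "a2", "a3"], tau=10.0)
```

With this update fraction, the loop re-solves two or three times per project, consistent with the two-to-three information update periods targeted in the experiments.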

6 Concluding Remarks

In this paper, we study the problem of project planning with crashing under uncertainty. We consider a satisficing objective that models penalties for completing a project after a deadline, as well as the total cost of crashing activities, relative to an overall project budget. Two solution procedures are proposed. The first develops a linear decision rule and two piecewise-linear decision rules that are computed at the project planning stage and that use updated information about activity completion times during project execution. In addition, we describe a rolling horizon procedure that takes advantage of periodic information updates during the execution of activities to determine new decision rules.

Our results provide several insights that project managers will find useful. Based on our computational results, our PWLDR strategies recommend crashing decisions that deliver 20-activity projects on time and on budget with over 99% probability, compared to less than 45% probability using traditional PERT-based strategies on the same benchmark problems. Moreover, our strategies provide expected budget overruns averaging less than 0.1%, compared with over 25% for PERT-based strategies. These results are achieved without knowledge of the specific activity time distribution. The simpler decision strategy, LDR, also substantially outperforms traditional strategies. Additionally, the LDR strategy can be implemented in a rolling horizon format for further improvement, which is especially advantageous when activity time variances are small. Importantly, the performance of our crashing strategies appears


to improve with project size. The issue of activity time correlation, which severely complicates project scheduling under traditional strategies, is easily handled; this gives project managers an incentive to document correlation where possible. Finally, while PERT is widely used in practice because of its ease of implementation, the availability of the ROME software (Goh and Sim 2009) makes the implementation of our methodology similarly easy.

Two important special cases of the model are relevant. First, the crash cost budget may be viewed deterministically if there is no flexibility in the budget allocated to a project. In this case, the project management problem is to make crashing decisions that maximize the probability of completing the project by the given time τ, subject to the deterministic budget constraint. Second, the project completion time may be viewed deterministically. This is relevant where the client imposes a firm deadline on project delivery, beyond which the project contract is void. In this case, the project management problem is to make crashing decisions that maximize the probability of completing the project within the given budget B, subject to the deterministic completion time constraint. Both applications can be modeled and solved using the methodology described in this paper. However, the special structure of these models may enable more efficient solution procedures and more accurate approximations.

Several extensions of the general problem considered in this paper are also relevant for future research. The first extension is the modeling of discrete crashing resources, for example the outsourcing of an entire activity, within a robust optimization framework. Related to this is the modeling of discrete crash cost functions for activities, for example when an extra production shift is added in order to meet a deadline on project completion time.
Another practical generalization is the modeling of discrete penalty functions for project completion time, such as sometimes arise in large public construction projects. Unfortunately, many projects suffer from a loss of resources during their execution stage, due to competition from other projects; hence, a further useful generalization is the modeling of uncertainty about the availability of resources for crashing. Still more generally, the study of multiple projects that share crashing resources would be a valuable contribution. In conclusion, we hope that our work will encourage the development of effective robust optimization approaches to these important project management problems.


Acknowledgments
The work of the first author is supported by the Kaneko/Lainovic International Fellowship. The work of the second author is supported by the Summer Fellowship Program and the Center for Operational Excellence of the Fisher College of Business, The Ohio State University. The work of the third author is supported by the Singapore-MIT Alliance, and by National University of Singapore academic research grant R-314-000-068-122. Helpful comments on an earlier draft were provided by Zhuoyu Long and Jin Qi (National University of Singapore).


References
Alexander, G.J., A.M. Baptista. 2004. A comparison of VaR and CVaR constraints on portfolio selection with the mean-variance model. Management Science 50 1261–1273.
Barbour, A.D., J.L. Jensen. 1989. Local and tail approximations near the Poisson limit. Scandinavian Journal of Statistics 16 75–87.
Ben-Tal, A., A. Goryashko, E. Guslitzer, A. Nemirovski. 2004. Adjustable robust solutions of uncertain linear programs. Mathematical Programming 99 351–376.
Ben-Tal, A., A. Nemirovski. 1998. Robust convex optimization. Mathematics of Operations Research 23 769–805.
Bertsimas, D., M. Sim. 2004. The price of robustness. Operations Research 52 35–53.
Black, K. 2008. Business Statistics: Contemporary Decision Making, 5th edition. Wiley, Hoboken, NJ.
Bowman, R.A. 1994. Stochastic gradient-based time-cost tradeoffs in PERT networks using simulation. Annals of Operations Research 53 533–551.
Brown, D.B., M. Sim. 2009. Satisficing measures for analysis of risky positions. Management Science 55 71–84.
Brown, D.B., E. De Giorgi, M. Sim. 2009. A satisficing alternative to prospect theory. Working paper, Fuqua School of Business, Duke University, Durham, NC, November.
Chaos Report. 2006. The Standish Group, Boston, MA.
Chen, W., M. Sim. 2009. Goal driven optimization. Operations Research 57 342–357.
Chen, X., M. Sim, P. Sun, J. Zhang. 2008. A linear decision-based approximation approach to stochastic programming. Operations Research 56 344–357.
Cohen, I., B. Golany, A. Shtub. 2007. The stochastic time-cost tradeoff problem: A robust optimization approach. Networks 49 175–188.
Cohen, Y., A. Sadeh. 2006. A new approach for constructing and generating AOA networks. Journal of Engineering, Computing and Architecture 1 1–15.
Demeulemeester, E., M. Vanhoucke, W. Herroelen. 2003. A random network generator for activity-on-the-node networks. Journal of Scheduling 6 13–34.
El-Ghaoui, L., F. Oustry, H. Lebret. 1998. Robust solutions to uncertain semidefinite programs. SIAM Journal on Optimization 9 33–52.
Ellsberg, D. 1961. Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics 75 643–669.
Fleming, Q., J. Koppelman. 2005. Earned Value Project Management, 3rd edition. Project Management Institute, Newtown Square, PA.
Goh, J., M. Sim. 2009. Robust optimization made easy with ROME. Working paper, NUS Business School, National University of Singapore.
Goh, J., M. Sim. 2010. Distributionally robust optimization and its tractable approximations. Operations Research, to appear.
Greaver, M.F., II. 1999. Strategic Outsourcing: A Structured Approach to Outsourcing Decisions and Initiatives, 1st edition. AMACOM, New York City, NY.
Gutjahr, W.J., C. Strauss, E. Wagner. 2000. A stochastic branch-and-bound approach to activity crashing in project management. INFORMS Journal on Computing 12 125–135.
Hall, N.G., M.E. Posner. 2001. Generating experimental data for computational testing with machine scheduling applications. Operations Research 49 854–865.
Harrison, F.L., D. Lock. 2004. Advanced Project Management: A Structured Approach, 4th edition. Gower Publishing, Aldershot, U.K.
Herroelen, W., R. Leus. 2005. Project scheduling under uncertainty: Survey and research potentials. European Journal of Operational Research 165 289–306.
Hughes, M.W. 1986. Why projects fail: The effects of ignoring the obvious. Industrial Engineering, April, 14–18.
Kelley, J.E., Jr., M.R. Walker. 1959. Critical-path planning and scheduling. Proceedings, AFIPS Joint Conferences, 160–173.
Kerzner, H. 2009. Project Management: A Systems Approach to Planning, Scheduling, and Controlling, 10th edition. Wiley, Hoboken, NJ.
Klastorin, T.D. 2004. Project Management: Tools and Trade-Offs, 1st edition. Wiley, Hoboken, NJ.
Kwak, H.K., L. Ingall. 2007. Exploring Monte Carlo simulation applications for project management. Risk Management 9 44–57.
Michael, J.R., W.R. Schucany. 2002. The mixture approach for simulating bivariate distributions with specified correlations. The American Statistician 56 48–54.
Might, R.J., W.A. Fischer. 1985. The role of structural factors in determining project management success. IEEE Transactions on Engineering Management 32 71–77.
Natarajan, K., M. Sim, J. Uichanco. 2010. Tractable robust expected utility and risk models for portfolio optimization. Mathematical Finance, to appear.
Nelsen, R.B. 1999. An Introduction to Copulas. Springer, New York City, NY.
Nesterov, Y., A. Nemirovski. 1994. Interior-Point Polynomial Algorithms in Convex Programming. SIAM Studies in Applied Mathematics 13. SIAM, Philadelphia, PA.
1000ventures.com. 2009. Change management. Available at: http://1000ventures.com/business_guide/crosscuttings/change_management.html
Pells, D.L. 2008. Seven good reasons for the rapid growth of project management in IT and why that trend will continue. PM World Today X, June, 1–14.
Rockafellar, R.T., S. Uryasev. 2000. Optimization of conditional value-at-risk. The Journal of Risk 2 21–41.
Schonberger, R.J. 1981. Why projects are "always" late: A rationale based on manual simulation of a PERT/CPM network. Interfaces 11 66–70.
Siemens, N. 1971. A simple CPM time-cost tradeoff algorithm. Management Science 17 354–363.
Soyster, A.L. 1973. Convex programming with set-inclusive constraints and applications to inexact linear programming. Operations Research 21 1154–1157.
U.S. Department of Defense. 1962. PERT cost systems design. DoD and NASA Guide.
U.S. Navy. 1958. PERT summary report, phase I. Defense Technical Information Center, AD-735 902, Special Projects Office, Bureau of Naval Weapons.
Value Based Management.net. 2009. Product life cycle - Industry maturity stages. Available at: http://www.valuebasedmanagement.net/methods_product_life_cycle.html
Williams, T. 2003. The contribution of mathematical modeling to the practice of project management. IMA Journal of Management Mathematics 14 3–30.


