Having a robustly designed supply chain network is one of the most effective ways to hedge against network disruptions because contingency plans in the event of a disruption are often significantly limited. In this article, we study the facility reliability problem: how to design a reliable supply chain network in the presence of random facility disruptions with the option of hardening selected facilities. We consider a facility location problem incorporating two types of facilities, one that is unreliable and another that is reliable (which is not subject to disruption, but is more expensive). We formulate this as a mixed integer programming model and develop a Lagrangian Relaxation-based solution algorithm. We derive structural properties of the problem and show that for some values of the disruption probability, the problem reduces to the classical uncapacitated fixed charge location problem. In addition, we show that the proposed solution algorithm is not only capable of solving large-scale problems, but is also computationally effective. (C) 2009 Wiley Periodicals, Inc. Naval Research Logistics 57: 58-70, 2010
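The Lagrangian relaxation scheme referenced above can be illustrated on the classical uncapacitated fixed-charge location core that the problem is shown to reduce to. A minimal subgradient sketch, assuming a standard relaxation of the assignment constraints; this is not the paper's exact reliable/unreliable formulation:

```python
import numpy as np

def lagrangian_ufl(f, c, iters=200):
    """Subgradient-based Lagrangian relaxation for an uncapacitated
    fixed-charge location core (illustrative sketch, not the paper's
    exact reliable/unreliable model). Relaxes the assignment
    constraints sum_j x_ij = 1 with multipliers lam_i."""
    m, n = c.shape                      # m customers, n candidate facilities
    lam = c.min(axis=1).astype(float)   # one multiplier per customer
    best_lb, best_ub = -np.inf, np.inf
    for _ in range(iters):
        # Lagrangian subproblem decomposes by facility: open j iff
        # f_j + sum_i min(0, c_ij - lam_i) < 0
        red = np.minimum(c - lam[:, None], 0.0)
        contrib = f + red.sum(axis=0)
        open_j = contrib < 0.0
        lb = lam.sum() + contrib[open_j].sum()     # valid lower bound
        best_lb = max(best_lb, lb)
        # feasible (upper-bound) solution: serve every customer from the
        # cheapest open facility; open the cheapest one if none is open
        feas = open_j.copy()
        if not feas.any():
            feas[np.argmin(f + c.min(axis=0))] = True
        ub = f[feas].sum() + c[:, feas].min(axis=1).sum()
        best_ub = min(best_ub, ub)
        # subgradient of the relaxed constraints, Polyak-style step
        served = ((c - lam[:, None]) < 0.0) & open_j
        g = 1.0 - served.sum(axis=1)
        if not g.any():
            break
        lam = lam + (best_ub - lb) / (g @ g) * g
    return best_lb, best_ub
```

The loop alternates a decomposable subproblem (lower bound), a rounding heuristic (upper bound), and a multiplier update, which is the standard skeleton such algorithms build on.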

A defender wants to detect as quickly as possible whether some attacker is secretly conducting a project that could harm the defender. Security services, for example, need to expose a terrorist plot in time to prevent it. The attacker, in turn, schedules his activities so as to remain undiscovered as long as possible. One pressing question for the defender is: which of the project's activities to focus intelligence efforts on? We model the situation as a zero‐sum game, establish that a late‐start schedule defines a dominant attacker strategy, and describe a dynamic program that yields a Nash equilibrium for the zero‐sum game. Through an innovative use of cooperative game theory, we measure the harm reduction thanks to each activity's intelligence effort, obtain insight into what makes intelligence effort more effective, and show how to identify opportunities for further harm reduction. We use a detailed example of a nuclear weapons development project to demonstrate how a careful trade‐off between time and ease of detection can reduce the harm significantly.
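The dominant late-start schedule above is obtained from a standard CPM-style backward pass. A minimal sketch, assuming activities are listed in topological order and `succs` maps each activity to its successors; all names are illustrative:

```python
def late_start_schedule(durations, succs, deadline):
    """Backward (CPM-style) pass computing each activity's latest start
    time against a common deadline. Assumes `durations` lists
    activities in topological order (dicts preserve insertion order in
    Python 3.7+); names are illustrative, not the paper's notation."""
    lf = {a: deadline for a in durations}          # latest finish times
    for a in reversed(list(durations)):
        for s in succs.get(a, []):
            # activity a must finish before any successor's latest start
            lf[a] = min(lf[a], lf[s] - durations[s])
    return {a: lf[a] - durations[a] for a in durations}
```

For example, with activities A and B both preceding C and a deadline of 10, every activity is pushed as late as the precedence constraints allow.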

We consider the problem of optimally maintaining a stochastically degrading, single‐unit system using heterogeneous spares of varying quality. The system's failures are unannounced; therefore, it is inspected periodically to determine its status (functioning or failed). The system continues in operation until it is either preventively or correctively maintained. The available maintenance options include perfect repair, which restores the system to an as‐good‐as‐new condition, and replacement with a randomly selected unit from the supply of heterogeneous spares. The objective is to minimize the total expected discounted maintenance costs over an infinite time horizon. We formulate the problem using a mixed observability Markov decision process (MOMDP) model in which the system's age is observable but its quality must be inferred. We show, under suitable conditions, the monotonicity of the optimal value function in the belief about the system quality and establish conditions under which finite preventive maintenance thresholds exist. A detailed computational study reveals that the optimal policy encourages exploration when the system's quality is uncertain; the policy is more exploitive when the quality is highly certain. The study also demonstrates that substantial cost savings are achieved by utilizing our MOMDP‐based method as compared to more naïve methods of accounting for heterogeneous spares.
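The belief about the unobserved quality is driven by a Bayes update at each periodic inspection, which is the mixed-observability ingredient of the model above. A minimal sketch of that single step; `survive_prob` is a hypothetical one-period survival probability per quality type, not the paper's model:

```python
def belief_update(belief, survive_prob, functioning):
    """One Bayes update of the belief over the unobserved quality type
    after a periodic inspection (sketch; survive_prob[q] is a
    hypothetical one-period survival probability for quality type q)."""
    likelihood = survive_prob if functioning else [1.0 - p for p in survive_prob]
    post = [b * l for b, l in zip(belief, likelihood)]
    z = sum(post)                      # normalizing constant
    return [p / z for p in post]
```

Observing continued functioning shifts mass toward the higher-quality type, which is why the optimal policy can afford to be more exploitative once the belief concentrates.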

In reliability engineering, a minimal repair restores the failed unit (e.g., a system or component) to the same condition it was in just before the failure. With the help of the well‐known Gamma‐Poisson relationship, this paper investigates optimal allocation strategies of minimal repairs for parallel and series systems by carrying out stochastic comparisons of various allocation policies in terms of the hazard rate, the reversed hazard rate, and the likelihood ratio orderings. Numerical examples are presented to illustrate these findings. These results not only strengthen and generalize some known results from the seminal work of Shaked and Shanthikumar, but also solve the open problems posed in the studies of Chahkandi et al. and Arriaza et al.
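The Gamma-Poisson relationship underlying these results says that, under minimal repair, the failure epochs form a nonhomogeneous Poisson process whose cumulative intensity is the unit's cumulative hazard. A minimal simulation sketch for a Weibull unit, with illustrative parameter names:

```python
import random

def minimal_repair_count(beta, eta, horizon, rng):
    """Number of failures of a Weibull(beta, eta) unit kept running by
    minimal repairs over [0, horizon]. Under minimal repair the failure
    epochs form an NHPP whose cumulative intensity is the cumulative
    hazard H(t) = (t / eta) ** beta, so the count is Poisson with mean
    H(horizon): the Gamma-Poisson link. Simulated by inverting H."""
    t, count = 0.0, 0
    while True:
        # next failure epoch solves H(t_next) = H(t) + Exp(1)
        h_next = (t / eta) ** beta + rng.expovariate(1.0)
        t = eta * h_next ** (1.0 / beta)
        if t > horizon:
            return count
        count += 1
```

Averaging `minimal_repair_count(2.0, 1.0, 1.0, random.Random(0))` over many sample paths approaches H(1) = 1, consistent with the Poisson count.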

We consider the multitasking scheduling problem on unrelated parallel machines to minimize the total weighted completion time. In this problem, each machine processes a set of jobs, and the processing of a selected job on a machine may be interrupted by the other jobs that are scheduled on the same machine but not yet finished. To solve this problem, we propose an exact branch‐and‐price algorithm, where the master problem at each search node is solved by a novel column generation scheme, called in‐out column generation, to maintain the stability of the dual variables. We use a greedy heuristic to obtain a set of initial columns to start the in‐out column generation, and a hybrid strategy combining a genetic algorithm and an exact dynamic programming algorithm to solve the pricing subproblems approximately and exactly, respectively. Using randomly generated data, we conduct numerical studies to evaluate the performance of the proposed solution approach. We also examine the effects of multitasking on the scheduling outcomes, with which the decision maker can justify making investments to adopt or avoid multitasking.
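The greedy heuristic used to seed the column generation is not specified here; for total weighted completion time, a natural choice is a WSPT-style rule. A minimal single-machine sketch, which is only illustrative and ignores the multitasking interruptions:

```python
def wspt_schedule(jobs):
    """WSPT-style greedy: sequence (processing_time, weight) jobs in
    nondecreasing p/w order, which is optimal for single-machine total
    weighted completion time without interruptions. A plausible way to
    build initial columns (sketch, not the paper's exact heuristic)."""
    order = sorted(jobs, key=lambda pw: pw[0] / pw[1])
    t = total = 0
    for p, w in order:
        t += p                 # completion time of this job
        total += w * t
    return order, total
```

Each machine's job sequence then supplies one initial column for the restricted master problem.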

We investigate the operational impacts of consumer‐initiated group buying (CGB), whereby consumers voluntarily form buying groups to negotiate bulk deals with retailers. This differs from regular purchasing, whereby consumers visit retailers individually and pay posted prices. When visited by a buying group, a retailer decides either to turn down the group's demand or to satisfy it in its entirety. If turned down by a retailer, group consumers continue to visit other retailers. If their group effort fails to conclude a deal, some group consumers switch to individual purchasing, provided they receive a non‐negative utility by doing so. Even after a successful group event, group consumers who opt out of the event because of utility concerns may switch to individual purchasing as well. Retailer competition, group size, and the chance that group consumers switch to individual purchasing when their demand goes unsatisfied are crucial to how retailers adjust their operations to deal with CGB. Under retailer competition, the rise of CGB results in every consumer paying the same reduced price when the group size is small, but makes group consumers pay more than they would by purchasing individually when the group size is large. This has mixed consequences for retailers' profits in both absolute and relative terms.

Existing models of multistage service systems assume full information on the state of downstream stages. In this paper, we investigate how much the lack of such information affects jobs' waiting times in a two‐stage system with two types of jobs at the first stage. The goal is to find the optimal control policy for the server at the first stage to switch between type‐1 and type‐2 jobs so as to minimize the long‐run average number of jobs in the system. We identify control policies, and corresponding conditions, under which a system with no or only partial information can still capture most of the benefit of having full information.

In financial engineering, the sensitivities of derivative prices (also known as the Greeks) are important quantities in risk management, and stochastic gradient estimation methods are used to estimate them given the market parameters. In practice, the surface (function) of the Greeks with respect to the underlying parameters is much more desirable, because it can be used in real‐time risk management. In this paper, we consider derivatives with multiple underlying assets and propose three stochastic kriging‐based methods (element‐by‐element, importance mapping, and Cholesky decomposition) to fit the surface of the gamma matrix so as to meet the time constraints and precision requirements of real‐time risk management. Numerical experiments are provided to illustrate the effectiveness of the proposed methods.
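The appeal of the Cholesky-decomposition variant is that fitting entries of a lower-triangular factor and rebuilding the matrix as L Lᵀ guarantees a symmetric positive semidefinite gamma matrix at every parameter point. A minimal sketch; `l_funcs` is a hypothetical grid of fitted per-entry metamodels standing in for the stochastic kriging surfaces:

```python
import numpy as np

def psd_from_cholesky_surface(l_funcs, x):
    """Rebuild the gamma matrix at parameter point x from fitted
    lower-triangular factor entries: Gamma = L @ L.T is symmetric
    positive semidefinite by construction (sketch; l_funcs[i][j] is a
    hypothetical fitted metamodel for entry L[i, j])."""
    d = len(l_funcs)
    L = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1):
            L[i, j] = l_funcs[i][j](x)
    return L @ L.T
```

Fitting the raw matrix entries element by element carries no such guarantee, which is one reason to prefer working in the factor space.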

Gamma accelerated degradation tests (ADTs) are widely used to obtain timely lifetime information for highly reliable products whose degradation paths follow a gamma process. The existing literature addresses the problem of how to conduct an efficient ADT, including determining the stress‐testing levels and their corresponding sample‐size allocations. Existing results focus mainly on the case of a single accelerating variable; however, a single variable may not be practical when the quality characteristics of the product degrade slowly. To overcome this difficulty, we propose an analytical approach to this decision‐making problem for the case of two accelerating variables. Specifically, under the criterion of minimizing the asymptotic variance of the estimated q quantile of the product's lifetime distribution, we show analytically that the optimal stress levels and sample‐size allocations can be obtained simultaneously via a general equivalence theorem. In addition, we use a practical example to illustrate the proposed procedure.

Information technology (IT) infrastructure relies on a globalized supply chain that is vulnerable to numerous risks from adversarial attacks. It is important to protect IT infrastructure from these dynamic, persistent risks by delaying adversarial exploits. In this paper, we propose max‐min interdiction models for critical infrastructure protection that prioritize cost‐effective security mitigations to maximally delay adversarial attacks. We consider attacks originating from multiple adversaries, each of whom aims to find a “critical path” through the attack surface to complete the corresponding attack as soon as possible. Decision‐makers can deploy mitigations to delay attack exploits; however, mitigation effectiveness is sometimes uncertain. We propose a stochastic model variant that addresses this uncertainty by incorporating random delay times. The proposed models can be reformulated as a nested max‐max problem using dualization. We propose a Lagrangian heuristic approach that decomposes the max‐max problem into a number of smaller subproblems and updates upper and lower bounds on the original problem via subgradient optimization. We evaluate the perfect‐information solution value as an alternative method for updating the upper bound. Computational results demonstrate that the Lagrangian heuristic identifies near‐optimal solutions efficiently and outperforms a general‐purpose mixed‐integer programming solver on medium and large instances.
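Each adversary's “critical path” is a longest path through the DAG of attack activities, which deployed mitigations lengthen by adding delays. A minimal deterministic sketch, with illustrative activity names and delay map, ignoring the stochastic variant:

```python
def earliest_completion(durations, preds, delay):
    """Attacker's earliest completion time: a longest-path (critical
    path) computation over a DAG of attack activities, where a deployed
    mitigation adds delay[a] to activity a (deterministic sketch with
    illustrative names; assumes `durations` lists activities in
    topological order)."""
    finish = {}
    for a in durations:
        start = max((finish[p] for p in preds.get(a, [])), default=0.0)
        finish[a] = start + durations[a] + delay.get(a, 0.0)
    return max(finish.values())
```

The defender's interdiction problem then chooses the delay map, subject to a budget, to maximize this minimum completion time across adversaries.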

We consider the classical problem of whether certain classes of lifetime distributions are preserved under the formation of coherent systems. Under the assumption of independent and identically distributed (i.i.d.) component lifetimes, we consider the NBUE (new better than used in expectation) and NWUE (new worse than used in expectation) classes. First, a necessary condition for a coherent system to preserve the NBUE class is given. Sufficient conditions are then obtained for systems satisfying this necessary condition. The sufficient conditions are satisfied for a collection of systems which includes all parallel systems, but the collection is shown to be strictly larger. We also prove that no coherent system preserves the NWUE class. As byproducts of our study, we obtain the following results for the case of i.i.d. component lifetimes: (a) the DFR (decreasing failure rate) class is preserved by no coherent systems other than series systems, and (b) the IMRL (increasing mean residual life) class is not preserved by any coherent systems. Generalizations to the case of dependent component lifetimes are briefly discussed.