Decades ago, simulation was famously characterized as a “method of last resort,” to which analysts should turn only “when all else fails.” In the intervening decades, the technologies supporting simulation—computing hardware, simulation-modeling paradigms, simulation software, and design-and-analysis methods—have all advanced dramatically. We offer an updated view: simulation is now a very appealing option for modeling and analysis. When applied properly, simulation can provide fully as much insight, with as much precision as desired, as exact analytical methods based on more restrictive assumptions. The fundamental advantage of simulation is that it accommodates far less restrictive modeling assumptions, yielding an underlying model that more closely reflects reality, is thus more valid, and so supports better decisions. Published 2015 Wiley Periodicals, Inc. Naval Research Logistics 62: 293–303, 2015

The allocation of underwater sensors for tracking, localization, and surveillance purposes is a fundamental problem in anti‐submarine warfare. Inexpensive passive receivers have been heavily utilized in recent years; however, modern submarines are increasingly quiet and difficult to detect with receivers alone. Recently, the idea of deploying noncollocated sources and receivers has emerged as a promising alternative to purely passive sensor fields and to traditional sonar fields composed of collocated sources and receivers. Such a multistatic sonar network carries a number of advantages, but it also brings increased system complexity resulting from its unusual coverage patterns. In this work, we study the problem of optimally positioning active multistatic sonar sources for a point coverage application where all receivers and points of interest are fixed and stationary. Using a definite range sensor model, we formulate exact methods and approximation algorithms for this problem and compare these algorithms via computational experiments. We also examine the performance of these algorithms on a discrete approximation of a continuous area coverage problem and find that they offer a significant improvement over two types of random sensor deployment.
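To make the point-coverage setting concrete, the sketch below implements a greedy max-coverage heuristic under one common definite-range multistatic rule, in which a point is detected by a source-receiver pair when the product of the source-point and receiver-point distances falls below a threshold. The detection rule, function names, and parameter values are illustrative assumptions, not the paper's formulation.

```python
import math

def detects(source, receiver, point, rho2):
    """Definite-range multistatic rule (assumed form): a point is detected
    when the product of source-point and receiver-point distances is at
    most rho2."""
    return math.dist(source, point) * math.dist(receiver, point) <= rho2

def greedy_sources(candidates, receivers, points, rho2, k):
    """Greedily place k sources, each time choosing the candidate location
    that covers the most still-uncovered points."""
    chosen, uncovered = [], set(range(len(points)))
    for _ in range(k):
        def gain(s):
            return sum(1 for i in uncovered
                       if any(detects(s, r, points[i], rho2) for r in receivers))
        best = max(candidates, key=gain)
        chosen.append(best)
        uncovered -= {i for i in uncovered
                      if any(detects(best, r, points[i], rho2) for r in receivers)}
    return chosen, uncovered
```

Greedy selection carries the standard (1 − 1/e) guarantee for this submodular coverage objective, making it one natural baseline for the kind of approximation algorithms compared in the experiments.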

This paper examines three types of sensitivity analysis of a firm's responsive pricing and responsive production strategies under imperfect demand updating. Demand has a multiplicative form in which the market size updates according to a bivariate normal model. First, we show that both responsive production and responsive pricing resemble the classical pricing newsvendor with posterior demand uncertainty in terms of the optimal performance and the first-stage decision. Second, we show that the performance of responsive production is sensitive to the first-stage decision, whereas responsive pricing is insensitive. This suggests that a “posterior rationale” (i.e., using the optimal production decision from the classical pricing newsvendor with expected posterior uncertainty) provides a simple and near-optimal first-stage production heuristic for responsive pricing. However, responsive production obtains higher expected profits than responsive pricing under certain conditions, which implies that the firm's ability to calculate the first-stage decision correctly can help determine which responsive strategy to use. Lastly, we find that the firm's performance is not sensitive to parameter uncertainty in the market size, total uncertainty level, and information quality, but is sensitive to uncertainty in the procurement cost and price elasticity.

Environmentally friendly energy resources open new opportunities to tackle the problems of energy security and climate change arising from the wide use of fossil fuels. This paper focuses on optimally allocating the energy generated by a renewable energy system so as to minimize the total electricity cost of a sustainable manufacturing system under a time-of-use tariff by clipping the peak demand. A rolling horizon approach is adopted to handle the uncertainty caused by changing weather. A nonlinear mathematical programming model is established for each decision epoch based on the predicted energy generation and the probability distribution of power demand in the manufacturing plant. The objective function of the model is shown to be convex, Lipschitz-continuous, and subdifferentiable. A generalized Benders decomposition method based on a primal-dual subgradient descent algorithm is proposed to solve the model. A series of numerical experiments shows the effectiveness of the solution approach and the significant benefits of using renewable energy resources.
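To make the rolling-horizon idea concrete, here is a deliberately simplified deterministic sketch: renewable energy is assigned greedily to the highest-tariff hours (a crude stand-in for the paper's stochastic nonlinear program), and at each epoch only the current hour's allocation is committed before re-solving with the next forecast. The function names and the greedy rule are illustrative assumptions.

```python
def allocate_renewable(demand, tariff, energy):
    """Toy deterministic peak clipping: assign the available renewable
    energy to the highest-tariff hours first."""
    alloc = [0.0] * len(demand)
    for t in sorted(range(len(demand)), key=lambda t: -tariff[t]):
        use = min(demand[t], energy)
        alloc[t], energy = use, energy - use
        if energy <= 0:
            break
    return alloc

def rolling_horizon(forecasts, demand, tariff):
    """At each epoch, re-solve over the remaining horizon with the latest
    renewable-generation forecast and commit only the current hour."""
    committed = []
    for t, energy in enumerate(forecasts):
        plan = allocate_renewable(demand[t:], tariff[t:], energy)
        committed.append(plan[0])
    return committed
```

The essential rolling-horizon mechanism survives the simplification: decisions beyond the current epoch are provisional and are revised as new forecasts arrive.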

We investigate the operational impacts of consumer-initiated group buying (CGB), whereby consumers voluntarily form buying groups to negotiate bulk deals with retailers. This differs from regular purchasing, in which consumers visit retailers individually and pay posted prices. When visited by group consumers, a retailer decides either to forgo their demand or to satisfy it in its entirety. If turned down by a retailer, group consumers continue on to other retailers. If the group effort fails to conclude a deal, some group consumers switch to individual purchasing, provided they receive a non-negative utility by doing so. Even after a successful group event, group consumers who opt out of the event on utility grounds may likewise switch to individual purchasing. Retailer competition, group size, and the chance that unsatisfied group consumers switch to individual purchasing are crucial to how retailers adjust their operations to deal with CGB. Under retailer competition, the rise of CGB results in every consumer paying the same reduced price when the group is small, but makes group consumers pay more than they would individually when the group is large. This has mixed consequences for retailer profits in both absolute and relative terms.

We consider the multitasking scheduling problem on unrelated parallel machines to minimize the total weighted completion time. In this problem, each machine processes a set of jobs, and the processing of the currently selected job on a machine may be interrupted by other jobs that are scheduled on the same machine but not yet finished. To solve this problem, we propose an exact branch-and-price algorithm in which the master problem at each search node is solved by a novel column generation scheme, called in-out column generation, that maintains the stability of the dual variables. We use a greedy heuristic to obtain a set of initial columns to start the in-out column generation, and a hybrid strategy combining a genetic algorithm and an exact dynamic programming algorithm to solve the pricing subproblems approximately and exactly, respectively. Using randomly generated data, we conduct numerical studies to evaluate the performance of the proposed solution approach. We also examine the effects of multitasking on scheduling outcomes, with which the decision maker can justify investments to adopt or avoid multitasking.

This paper presents a model for designing a trade credit contract between a supplier and a retailer that coordinates a supply chain in the presence of an investment opportunity for the retailer. Specifically, we study a newsvendor model in which the supplier offers a trade credit contract to the retailer who, by delaying the payment, can invest the accounts payable amount and earn returns. We find that when the channel partners have symmetric information about the retailer's investment return, a conditionally concessional trade credit (CTC) contract, which includes a wholesale price, an interest-free period, and a minimum order requirement, can achieve channel coordination. We then extend the model to the information-asymmetry setting, in which the retailer's investment return is unobservable by the supplier. We show that, although the CTC contract cannot achieve coordination in this setting, it can effectively improve channel efficiency as well as profitability for the individual partners.

Gamma accelerated degradation tests (ADTs) are widely used to assess, in a timely manner, the lifetime information of highly reliable products whose degradation paths follow a gamma process. The existing literature addresses the problem of conducting an efficient ADT, including the determination of higher stress-testing levels and their corresponding sample-size allocations, but mainly for the case of a single accelerating variable. This may not be practical when the quality characteristics of the product degrade slowly. To overcome this difficulty, we propose an analytical approach to this decision-making problem for the case of two accelerating variables. Specifically, based on the criterion of minimizing the asymptotic variance of the estimated q quantile of the product's lifetime distribution, we analytically show that the optimal stress levels and sample-size allocations can be obtained simultaneously via a general equivalence theorem. In addition, we use a practical example to illustrate the proposed procedure.

Many manufacturers sell their products through retailers and share the revenue with those retailers. Given this phenomenon, we build a stylized model to investigate the role of revenue sharing schemes in supply chain coordination and product variety decisions. In our model, a monopolistic manufacturer serves two segments of consumers, which are distinguished by their willingness to pay for quality. In the scenario with exogenous revenue sharing ratios, when the potential gain from serving the low segment is substantial (e.g., the low‐segment consumers' willingness to pay is high enough or the low segment takes a large enough proportion of the market), the retailer is better off abandoning the revenue sharing scheme. Moreover, when the potential gain from serving the low (high) segment is substantial enough, the manufacturer finds it profitable to offer a single product. Furthermore, when revenue sharing ratios are endogenous, we divide our analysis into two cases, depending on the methods of cooperation. When revenue sharing ratios are negotiated at the very beginning, the decentralized supply chain causes further distortion. This suggests that the central premise of revenue sharing—the coordination of supply chains—may be undermined if supply chain parties meticulously bargain over it.

We consider the shortest path interdiction problem involving two agents, a leader and a follower, playing a Stackelberg game. The leader seeks to maximize the follower's minimum costs by interdicting certain arcs, thus increasing the travel time of those arcs. The follower may improve the network after the interdiction by lowering the costs of some arcs, subject to a cardinality budget restriction on arc improvements. The leader and the follower are both aware of all problem data, with the exception that the leader is unaware of the follower's improvement budget. The effectiveness of an interdiction action is given by the length of a shortest path after arc costs are adjusted by both the interdiction and improvement. We propose a multiobjective optimization model for this problem, with each objective corresponding to a different possible improvement budget value. We provide mathematical optimization techniques to generate a complete set of strategies that are Pareto‐optimal. Additionally, for the special case of series‐parallel graphs, we provide a dynamic‐programming algorithm for generating all Pareto‐optimal solutions.
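The special structure that enables a dynamic program on series-parallel graphs is their recursive composition: a series composition adds path lengths, while a parallel composition takes the minimum. A minimal sketch of that recursion for plain shortest paths follows; the paper's algorithm additionally tracks interdiction and improvement budgets, which this toy omits.

```python
def sp_length(g):
    """Shortest source-sink length in a series-parallel graph, given as a
    nested tuple: ("edge", cost), ("series", *parts), or ("parallel", *parts)."""
    kind = g[0]
    if kind == "edge":
        return g[1]
    lengths = [sp_length(part) for part in g[1:]]
    # Series: lengths add along the path; parallel: the follower takes the cheaper branch.
    return sum(lengths) if kind == "series" else min(lengths)
```

Because every subgraph's contribution is summarized by a single value, a DP can enrich that value with budget states and still combine subresults at each series or parallel node.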

In this article, we develop a novel role for the initial function v0 in the value iteration algorithm. When the optimal policy of a countable-state Markovian queueing control problem has a threshold or switching-curve structure, we conjecture that the choice of v0 can be tuned to generate monotonic sequences of n-stage threshold or switching-curve optimal policies. We show this for three queueing control models: the M/M/1 queue with admission control, the M/M/1 queue with service control, and the two-competing-queues model with quadratic holding costs. As a consequence, we obtain increasingly tighter upper and lower bounds; after a finite number of iterations, either the optimal threshold or the optimal switching-curve values in a finite number of states are available. This procedure can be used to increase numerical efficiency.
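For the M/M/1 admission-control case, the flavor of the procedure can be sketched as follows: run uniformized value iteration and record, at each iteration, the n-stage admission threshold. The instance below (arrival rate, service rate, reward, holding cost, truncation level, and the choice v0 ≡ 0) is purely illustrative; the paper's contribution concerns tuning v0 so that the recorded thresholds change monotonically.

```python
# Illustrative uniformized M/M/1 admission-control instance
# (all parameter values are assumptions, not taken from the paper).
LAM, MU, C, R, GAMMA, N = 0.4, 0.6, 1.0, 5.0, 0.95, 30

def value_iteration(v0, iters):
    """Run value iteration on states 0..N and record the n-stage admission
    threshold (smallest queue length where rejecting beats admitting)."""
    v, thresholds = list(v0), []
    for _ in range(iters):
        nv = [0.0] * (N + 1)
        for x in range(N + 1):
            # Admission is allowed only below the truncation level N.
            admit = R + v[x + 1] if x < N else float("-inf")
            arrival = max(admit, v[x])  # accept or reject the arriving customer
            nv[x] = -C * x + GAMMA * (LAM * arrival + MU * v[max(x - 1, 0)])
        v = nv
        thr = next((x for x in range(N) if R + v[x + 1] < v[x]), N)
        thresholds.append(thr)
    return v, thresholds

v, thresholds = value_iteration([0.0] * (N + 1), 300)
```

Because the value function stays concave under this update, the n-stage optimal policy is of threshold type at every iteration, which is what makes tracking a single threshold number per iteration meaningful.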

With rapid advances in sensing technology and data-acquisition systems, high-dimensional data appear in many settings. High dimensionality presents a new challenge to the traditional tools of multivariate statistical process control, owing to the “curse of dimensionality.” Various tests for mean vectors in high-dimensional settings have been discussed recently; however, they have rarely been adapted to process monitoring. This paper develops a distribution-free control chart based on interpoint distances for monitoring mean vectors in high-dimensional settings. Beyond the Euclidean distance, we consider the family of Minkowski distances, which generalizes the Euclidean and other distances. The proposed approach is very general in that it represents a class of distribution-free, distance-based control charts. Numerical results show that the proposed control chart is efficient in detecting mean shifts under both symmetric and heavy-tailed distributions.
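A minimal sketch of an interpoint-distance monitoring scheme is shown below: the statistic for a new observation is its average Minkowski distance to an in-control reference sample, and the control limit is an empirical quantile of leave-one-out statistics from the reference data. This simplified design and all parameter values are assumptions for illustration; the paper's chart is more elaborate.

```python
def minkowski(u, v, p=2.0):
    """Minkowski distance of order p (p=2 recovers Euclidean, p=1 Manhattan)."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def control_limit(reference, p=2.0, alpha=0.05):
    """Empirical limit: each reference point's mean distance to the others,
    then the (1 - alpha) quantile of those values (assumed, simplified design)."""
    n = len(reference)
    stats = sorted(
        sum(minkowski(reference[i], reference[j], p) for j in range(n) if j != i) / (n - 1)
        for i in range(n))
    k = min(n - 1, int((1 - alpha) * n))
    return stats[k]

def monitor(obs, reference, limit, p=2.0):
    """Return the distance statistic for obs and whether it signals a shift."""
    stat = sum(minkowski(obs, y, p) for y in reference) / len(reference)
    return stat, stat > limit
```

Because the limit comes from the empirical reference distribution rather than a parametric model, the chart inherits the distribution-free character the abstract emphasizes.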

Wildfire managers use initial attack (IA) to control wildfires before they grow large and become difficult to suppress. Although the majority of wildfire incidents are contained by IA, the small percentage of fires that escape IA causes most of the damage. Therefore, planning a successful IA is very important. In this article, we study the vulnerability of IA in wildfire suppression using an attacker-defender Stackelberg model. The attacker's objective is to coordinate the simultaneous ignition of fires at various points in a landscape to maximize the number of fires that cannot be contained by IA. The defender's objective is to optimally dispatch suppression resources from multiple fire stations located across the landscape to minimize the number of wildfires not contained by IA. We use a decomposition algorithm to solve the model and apply it to a test-case landscape. We also investigate the impact of response delays, the fire growth rate, the amount of suppression resources, and the locations of fire stations on the success of IA.

We consider an integrated usage and maintenance optimization problem for a k-out-of-n system installed on a moving asset. k-out-of-n systems are commonly used in practice to increase availability, where n denotes the total number of parallel, identical units and k the number of units that must be active for the system to function. Moving assets such as aircraft, ships, and submarines are subject to different operating modes. Operating modes dictate not only the number of system units that need to be active, but also where the moving asset is physically located and under which environmental conditions it operates. We use the intrinsic age concept to model the degradation process; the intrinsic age is analogous to an intrinsic clock that ticks at a different pace in different operating modes. In our problem setting, the number of active units, the degradation rates of active and standby units, maintenance costs, and the type of economic dependencies are all functions of the operating mode. In each operating mode, the decision maker must decide which units to activate (the usage decision) and which units to maintain (the maintenance decision). Since the degradation rate differs for active and standby units, the units to be maintained depend on the units that have been activated, and vice versa. To minimize maintenance costs, usage and maintenance decisions should therefore be jointly optimized. We formulate this problem as a Markov decision process and establish some structural properties of the optimal policy. Moreover, we assess the performance of usage policies that are commonly implemented for maritime systems and show that the resulting cost increase can be up to 27% in realistic settings. Our numerical experiments demonstrate the cases in which joint usage and maintenance optimization is more valuable.
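The intrinsic-age clock described above can be sketched in a few lines. The operating modes, unit statuses, and rate values here are hypothetical, chosen only to show how a mode- and status-dependent clock accumulates age:

```python
# Hypothetical degradation rates (intrinsic-age units per hour), indexed by
# (operating mode, unit status); none of these values come from the paper.
RATES = {
    ("transit", "active"): 1.0, ("transit", "standby"): 0.2,
    ("mission", "active"): 1.5, ("mission", "standby"): 0.3,
}

def intrinsic_age(schedule):
    """Accumulate intrinsic age over a schedule of (mode, status, hours)
    segments: the clock ticks at the rate of the current mode and status."""
    return sum(RATES[(mode, status)] * hours for mode, status, hours in schedule)

age = intrinsic_age([("transit", "active", 10),
                     ("mission", "standby", 20),
                     ("mission", "active", 4)])
# 10*1.0 + 20*0.3 + 4*1.5 = 22.0 intrinsic-age units
```

Two units with the same calendar age can thus have very different intrinsic ages, which is why the units chosen for activation feed back into the units that need maintenance.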