The focus of this work is the analysis of learning in single-machine scheduling problems. Surprisingly, the well-known learning effect has never before been considered in connection with scheduling problems. It is shown in this paper that even when learning is introduced into the job processing times, two important types of single-machine problems remain polynomially solvable.
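As a point of reference (the abstract does not reproduce the model, so the notation here is assumed), a common formalization of the learning effect lets the processing time of a job decrease with the position in which it is scheduled:

\[
p_{jr} = p_j \, r^{a}, \qquad a \le 0,
\]

where \(p_j\) is the normal processing time of job \(j\), \(r\) is its position in the sequence, and \(a\) is the learning index.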
An original methodology for applying rough sets to preference modeling in multi-criteria decision problems is presented. The methodology operates on a pairwise comparison table (PCT) containing pairs of actions described by graded preference relations on particular criteria and by a comprehensive preference relation. It builds a rough approximation of the comprehensive preference relation by means of graded dominance relations. Decision rules derived from this rough approximation can be used to obtain a recommendation in multi-criteria choice and ranking problems. The methodology is illustrated by an example concerning multi-criteria programming of water supply systems.
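As a hedged sketch of the idea (the notation is assumed, not taken from the abstract), the rough approximation of the comprehensive preference relation \(S\) over pairs of actions follows the usual rough set form

\[
\underline{S} = \{(a,b) : D(a,b) \subseteq S\}, \qquad
\overline{S} = \{(a,b) : D(a,b) \cap S \neq \emptyset\},
\]

where \(D(a,b)\) denotes the set of pairs of actions related to \((a,b)\) by the graded dominance relation; decision rules are then induced from the lower and upper approximations.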
This paper establishes the basic theory of triangular fuzzy numbers and improves the formulation for comparing their magnitudes. On this basis, a practical example from petroleum prospecting is presented.
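For context (the paper's improved formulation is not reproduced in the abstract; the following is a standard representation), a triangular fuzzy number \(\tilde A = (a, b, c)\) has membership function

\[
\mu_{\tilde A}(x) =
\begin{cases}
(x-a)/(b-a), & a \le x \le b,\\
(c-x)/(c-b), & b \le x \le c,\\
0, & \text{otherwise},
\end{cases}
\]

and one common, purely illustrative way to compare sizes is by a defuzzified index such as the centroid \((a+b+c)/3\), ranking \(\tilde A_1\) above \(\tilde A_2\) when its index is larger.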
In this paper, we present a general framework for understanding the role of artificial neural networks (ANNs) in bankruptcy prediction. We give a comprehensive review of neural network applications in this area and illustrate the link between neural networks and traditional Bayesian classification theory. The method of cross-validation is used to examine the between-sample variation of neural networks for bankruptcy prediction. Based on a matched sample of 220 firms, our findings indicate that neural networks are significantly better than logistic regression models in prediction as well as classification rate estimation. In addition, neural networks are robust to sampling variations in overall classification performance.
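To make the cross-validation idea concrete, the following minimal sketch (not the authors' implementation; scikit-learn and synthetic data are assumed purely for illustration) compares a small neural network with logistic regression under k-fold cross-validation:

    # Minimal sketch (assumed setup, not the authors' code): estimating the
    # between-sample variation of ANN vs. logistic regression accuracy.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-in for a matched firm sample (features = financial ratios).
    X, y = make_classification(n_samples=220, n_features=5, random_state=0)

    models = {
        "logit": LogisticRegression(max_iter=1000),
        "ANN": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
        print(f"{name}: mean accuracy {scores.mean():.3f} (std {scores.std():.3f})")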
In this paper we use multi-output distance functions to investigate technical inefficiency in European railways. The principal aim of the paper is to compare the results obtained from three alternative methods of estimating multi-output distance functions: the construction of a parametric frontier using linear programming, data envelopment analysis (DEA), and corrected ordinary least squares (COLS). Input-orientated, output-orientated and constant returns to scale (CRS) distance functions are estimated and compared. The results indicate a strong degree of correlation between the input- and output-orientated results for each of the three methods. Significant correlations are also observed between the results obtained using the alternative estimation methods, the strongest being between the parametric linear programming and COLS methods. Finally, the paper concludes by suggesting that a combination of the technical efficiency scores obtained from the three different methods be used as the preferred set of scores, an idea borrowed from the time-series forecasting literature.
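For reference, the distance functions involved take the standard Shephard form (notation assumed here): the output distance function is

\[
D_O(x, y) = \min\{\theta : (y/\theta) \in P(x)\},
\]

where \(P(x)\) is the set of output vectors producible from input vector \(x\), so \(D_O(x,y) \le 1\) with equality indicating technical efficiency; the input-orientated function is defined analogously on the input requirement set.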
A large number of methods, such as discriminant analysis, logit analysis, and the recursive partitioning algorithm, have been used in the past for the prediction of business failure. Although some of these methods lead to models with a satisfactory ability to discriminate between healthy and bankrupt firms, they suffer from limitations, often due to unrealistic statistical hypotheses or to a language of communication that confuses the decision makers. We have therefore undertaken research aimed at relaxing these limitations. In this paper, the rough set approach is used to provide a set of rules able to discriminate between healthy and failing firms in order to predict business failure. Financial characteristics of a sample of 80 Greek firms are used to derive the rules and to evaluate their predictive ability. The results are very encouraging compared with those of discriminant and logit analyses, and demonstrate the usefulness of the proposed method for business failure prediction. The rough set approach discovers relevant subsets of financial characteristics and represents, in these terms, all the important relationships between the image of a firm and its risk of failure. The method analyses only facts hidden in the input data and communicates with the decision maker in the natural language of rules derived from his/her experience.
This paper proposes an integrated approach for determining attribute weights in multiple attribute decision making (MADM) problems. The approach combines the subjective information provided by a decision maker (DM) with the objective information to form a two-objective programming model. The resultant attribute weights and rankings of alternatives thus reflect both the DM's subjective considerations and the objective information. An example is used to illustrate the applicability of the proposed approach.
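One plausible formalization of such a model (assumed here for illustration; the paper's exact program may differ) is to choose weights \(w\) that simultaneously stay close to the DM's subjective weights \(w^{s}\) and score well against the objective decision-matrix information:

\[
\min \sum_{j=1}^{m}\left(w_j - w_j^{s}\right)^{2}, \qquad
\max \sum_{i=1}^{n}\sum_{j=1}^{m} w_j\, r_{ij}, \qquad
\text{s.t. } \sum_{j=1}^{m} w_j = 1,\; w_j \ge 0,
\]

where \(r_{ij}\) are normalized attribute values; the two objectives are then combined or traded off to yield a single weight vector.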
Two methods are frequently used for modeling the choice among uncertain outcomes: stochastic dominance and mean-risk approaches. The former is based on an axiomatic model of risk-averse preferences but does not provide a convenient computational recipe. The latter quantifies the problem in a lucid form of two criteria with possible trade-off analysis, but cannot model all risk-averse preferences. In particular, if variance is used as a measure of risk, the resulting mean-variance (Markowitz) model is, in general, not consistent with stochastic dominance rules. This paper shows that the standard semideviation (square root of the semivariance) as the risk measure makes the mean-risk model consistent with second-degree stochastic dominance, provided that the trade-off coefficient is bounded by a certain constant. Similar results are obtained for the absolute semideviation, and for the absolute and standard deviations in the case of symmetric or bounded distributions. In the analysis we use a new tool, the Outcome-Risk (O-R) diagram, which appears to be particularly useful for comparing uncertain outcomes.
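In symbols (notation assumed), with \(\mu(X) = \mathbb{E}[X]\) and standard semideviation

\[
\bar\sigma(X) = \sqrt{\mathbb{E}\big[\max\big(\mu(X) - X,\, 0\big)^{2}\big]},
\]

the claim is that the mean-risk preference \(\mu(X) - \lambda\,\bar\sigma(X) \ge \mu(Y) - \lambda\,\bar\sigma(Y)\) never contradicts second-degree stochastic dominance provided the trade-off coefficient satisfies \(0 \le \lambda \le 1\), the bound of 1 being the natural candidate for the constant mentioned above.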
This paper develops a consistent bootstrap estimation procedure for obtaining confidence intervals for Malmquist indices of productivity and their decompositions. Although the exposition is in terms of input-oriented indices, the techniques extend trivially to the output orientation. The bootstrap methodology extends earlier work described in Simar and Wilson (1998, Management Science). Some empirical examples are also given, using data on Swedish pharmacies.
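For reference, the input-oriented Malmquist productivity index between periods \(t\) and \(t+1\) takes the standard geometric-mean form (notation assumed):

\[
M = \left[\frac{D^{t}(x^{t+1}, y^{t+1})}{D^{t}(x^{t}, y^{t})} \cdot \frac{D^{t+1}(x^{t+1}, y^{t+1})}{D^{t+1}(x^{t}, y^{t})}\right]^{1/2},
\]

which decomposes into an efficiency-change term, \(D^{t+1}(x^{t+1},y^{t+1})/D^{t}(x^{t},y^{t})\), times a geometric-mean technical-change term; the bootstrap resamples the estimated efficiencies to place confidence intervals around \(M\) and each of its components.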
This paper presents an interactive procedure for solving a multiple attribute group decision making (MAGDM) problem with incomplete information. The main properties of the procedure are: (1) Each decision maker is asked to express his/her preferences in relation to an additive value model with incomplete preference statements. (2) Utilities are represented as ranges. The range-type representation makes it easy to compare each group member's utility information with the group's and to aggregate the members' utility information into the group's; the utility range is calculated from each member's incomplete information. (3) An interactive procedure is provided to help the group reach a consensus, making it easy for each member to modify or complete his/her utilities against the group's utility range. (4) Theoretical models are formally described for establishing the group's pairwise dominance relations from the group's utility ranges by means of a separable linear programming technique.
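A standard way to operationalize the dominance relations in (4) (the notation here is assumed) is: alternative \(a_k\) dominates \(a_l\) for the group if

\[
\min_{w \in W} \; \big[U(a_k; w) - U(a_l; w)\big] \;\ge\; 0,
\]

where \(U\) is the additive value model and \(W\) is the region of parameters consistent with the members' incomplete statements; each such test is solvable as a (separable) linear program.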
In this paper, we propose a new method for evaluating weapon systems by the analytic hierarchy process (AHP) based on linguistic variable weights. Many researchers have solved fuzzy MADM problems by applying fuzzy arithmetic operations to weights and attributes when computing performance scores; see, e.g., S.M. Chen [Evaluating weapon systems using fuzzy arithmetic operations, Fuzzy Sets and Systems 77 (1996) 265-276]. Chen used a large number of fuzzy arithmetic operations, which not only causes information (data) loss and additional fuzziness, but also compromises the ease and accuracy of decision-making. We therefore use a linguistic variable weight method to overcome these problems and to avoid decision-making errors. Our method is intuitive, accords with human reasoning, and is close to the inherent uncertainty of linguistic expression. We use many experts' viewpoints to build the membership functions used to calculate the performance scores, and we identify the decision maker's expectations so as to avoid being constrained by the system alternatives and the decision maker's subjective judgements. We also use the linguistic variable method to revise the methods of C.H. Cheng [in: Proceedings of the Fourth National Conference on Defense Management, pp. 1113-1124] and R.R. Yager [Fuzzy Sets and Systems 1 (1978) 87-95]. First, we measure the relative importance weight and use it to determine the concentration or dilation power of a linguistic hedge. Once the power of the linguistic hedge is determined, it is used to calculate the decision results. Finally, we construct a practical example on evaluating attack helicopters to illustrate the proposed method.
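The concentration and dilation operators underlying linguistic hedges take the standard Zadeh form (quoted here for reference; which power is actually applied is determined by the measured weights):

\[
\mu_{\mathrm{CON}(A)}(x) = \big[\mu_A(x)\big]^{2}, \qquad
\mu_{\mathrm{DIL}(A)}(x) = \big[\mu_A(x)\big]^{1/2},
\]

so that a hedge such as 'very' concentrates a membership function while 'more or less' dilates it.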
This paper deals with multiple criteria decision making problems with incomplete information involving multiple decision makers (multiple criteria group decision making, MCGDM). Decision makers (DMs) are usually willing or able to provide only incomplete information, because of time pressure, lack of knowledge or data, and limited expertise in the problem domain. There have been only a few studies considering incomplete information in group settings. Incompletely specified information defines a region of linear constraints, so that establishing pairwise dominance relationships between alternatives reduces to intractable nonlinear programs. To handle this difficulty, we suggest a method that utilizes individual decision results to form a group consensus. A final group consensus ranking, moving toward greater agreement among the participants, can be built by solving a series of linear programs that use the individual decision results under the group members' possibly different weight constraints.
In this paper, we propose a method for modifying a given comparison matrix so that the consistency ratio (CR) of the modified matrix is less than that of the original. We give an algorithm that derives a positive reciprocal matrix with acceptable consistency (i.e., CR < 0.1), establish a convergence theorem for the algorithm, and demonstrate its practicality with several examples.
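The modification algorithm itself is not reproduced in the abstract; as a minimal sketch of the quantity it controls, the consistency ratio of a positive reciprocal matrix A can be computed as follows (assumed implementation, using Saaty's random index table):

    # Minimal sketch (assumed implementation): Saaty's consistency ratio
    # CR = CI / RI for an n x n positive reciprocal comparison matrix A,
    # valid for 3 <= n <= 10.
    import numpy as np

    # Saaty's random consistency index RI for n = 1..10.
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

    def consistency_ratio(A):
        n = A.shape[0]
        lam_max = max(np.linalg.eigvals(A).real)  # principal eigenvalue
        CI = (lam_max - n) / (n - 1)              # consistency index
        return CI / RI[n]

    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    print(consistency_ratio(A))  # acceptable consistency if CR < 0.1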
In many distribution systems, the location of the distribution facilities and the routing of vehicles from these facilities are interdependent. Although this interdependence has been recognized by academics and practitioners alike, attempts to integrate the two decisions have been limited. The location routing problem (LRP), which combines the facility location and vehicle routing decisions, is NP-hard; due to this complexity, simultaneous solution methods are limited to heuristics. This paper presents a two-phase tabu search (TS) architecture for the solution of the LRP. The two-phase approach, first introduced in this paper, offers a computationally efficient strategy that integrates facility location and routing decisions, making it possible to search the solution space efficiently and produce good solutions without excessive computation. An extensive computational study shows that the TS algorithm achieves significant improvement over a recent effective LRP heuristic.
This paper presents a model for designing the pricing and return-credit strategy of a monopolistic manufacturer of single-period commodities. That is, given the unit manufacturing cost and the unit retail sale price, the manufacturer determines: (i) the unit price C to be charged to the retailer; and (ii) the unit credit V to be given to the retailer for units returned. While the manufacturer sets C and V, the order quantity Q is set by the retailer in response to the manufacturer's C and V. Among the unexpected findings derived from our model are: (i) unless an external force supports the retailer, the manufacturer can usually design a (C, V)-scheme that gives himself the lion's share of the profit; (ii) depending on the risk attitudes of the manufacturer and the retailer, the optimal return policy can range from 'no returns allowed' to 'unlimited returns with full credit'; and (iii) far from surrendering profit share to the retailer, a return-credit agreement can often be manipulated by a shrewd manufacturer to increase his profit.
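For the risk-neutral case (a hedged illustration; P denotes the unit retail price and F the demand distribution, neither symbol being fixed in the abstract), the retailer's response follows the newsvendor critical fractile: each sold unit earns \(P - C\) and each returned unit loses \(C - V\), so the order quantity is

\[
Q^{*} = F^{-1}\!\left(\frac{P - C}{P - V}\right),
\]

which makes explicit how raising the credit V shifts inventory risk from the retailer back to the manufacturer.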
In this paper, we present a Multiple Criteria Data Envelopment Analysis (MCDEA) model which can be used to improve the discriminating power of DEA methods and to yield more reasonable input and output weights without a priori information about the weights. In the proposed model, several different efficiency measures, including the classical DEA efficiency, are defined under the same constraints. Each measure serves as a criterion to be optimized, and efficiencies are then evaluated within the framework of multiple objective linear programming (MOLP). The method is illustrated through three examples whose data sets are taken from previous research on DEA's discriminating power and weight restrictions.
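For reference, the classical criterion shared by the measures is the standard DEA multiplier form for the unit \(o\) under evaluation (standard formulation, assumed here):

\[
\max\; h_o = \sum_{r} u_r y_{ro}
\quad \text{s.t.}\quad
\sum_{i} v_i x_{io} = 1,\qquad
\sum_{r} u_r y_{rj} - \sum_{i} v_i x_{ij} \le 0 \;\;\forall j,\qquad u_r, v_i \ge 0;
\]

writing \(d_j\) for the deviation of unit \(j\) from efficiency, additional criteria in the spirit of the model (e.g., minimizing the maximum deviation or the sum of deviations across all units) are optimized under these same constraints.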
This paper presents a novel fuzzy multiple criteria decision making (MCDM) method based on the concepts of ideal and anti-ideal points. Fuzzy set theory and hierarchical structure analysis are used to develop a weighted suitability decision matrix that evaluates the weighted suitability of the alternatives against the various criteria. The distances of the alternatives from the positive ideal solution and the negative ideal solution are then obtained using the proposed ranking method. Finally, the relative approximation values of the alternatives with respect to the positive ideal solution are ranked to determine the best alternative.
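The relative approximation value takes the familiar closeness-coefficient form (standard TOPSIS-style expression, assumed here):

\[
C_i = \frac{d_i^{-}}{d_i^{+} + d_i^{-}},
\]

where \(d_i^{+}\) and \(d_i^{-}\) are the distances of alternative \(i\) from the positive and negative ideal solutions; ranking by decreasing \(C_i\) selects the alternative that is simultaneously closest to the ideal and farthest from the anti-ideal.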
The Traveling Salesman Problem (TSP) is one of the most famous problems in combinatorial optimization. In this paper, we examine how the techniques of Guided Local Search (GLS) and Fast Local Search (FLS) can be applied to the problem. GLS sits on top of local search heuristics, and its main aim is to guide these procedures in exploring the vast search spaces of combinatorial optimization problems efficiently and effectively. GLS can be combined with the neighborhood reduction scheme of FLS, which significantly speeds up the operations of the algorithm. The combination of GLS and FLS with TSP local search heuristics of different efficiency and effectiveness is studied in an effort to determine how strongly GLS depends on the underlying local search heuristic. Comparisons with some of the best TSP heuristic algorithms and general optimization techniques demonstrate the advantages of GLS over alternative heuristic approaches suggested for the problem.
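In its standard form, GLS augments the objective \(g\) (here, the tour length) with penalties on solution features (for the TSP, edges):

\[
h(s) = g(s) + \lambda \sum_{i} p_i\, I_i(s),
\]

where \(I_i(s)\) indicates whether feature \(i\) is present in solution \(s\), \(p_i\) is its penalty counter, and \(\lambda\) controls the guidance strength; whenever local search settles in a local optimum, GLS increments the penalties of the features maximizing the utility \(c_i / (1 + p_i)\), with \(c_i\) the feature's cost.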
A new Global Efficiency Measure (GEM) based on the Russell Graph Measure of Technical Efficiency is proposed. Called the Enhanced Russell Measure, it offers a solution to the problem of nonzero slacks when measuring efficiency by means of Data Envelopment Analysis (DEA) models. The new measure is well defined and straightforward, and represents the ratio between average efficiency in inputs and in outputs.
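A standard statement of the measure (notation assumed) makes this ratio structure explicit:

\[
\min\; \frac{\frac{1}{m}\sum_{i=1}^{m}\theta_i}{\frac{1}{s}\sum_{r=1}^{s}\phi_r}
\quad \text{s.t.}\quad
\sum_{j}\lambda_j x_{ij} \le \theta_i x_{io},\quad
\sum_{j}\lambda_j y_{rj} \ge \phi_r y_{ro},\quad
\theta_i \le 1,\; \phi_r \ge 1,\; \lambda_j \ge 0,
\]

where the \(\theta_i\) contract each input of the evaluated unit and the \(\phi_r\) expand each output, so the optimum is the ratio of average input efficiency to average output efficiency and equals 1 only for units with no remaining slack.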