We introduce multiplicative drift analysis as a suitable way to analyze the runtime of randomized search heuristics such as evolutionary algorithms. Our multiplicative version of the classical drift theorem allows easier analyses in the often encountered situation that the optimization progress is roughly proportional to the current distance to the optimum. To display the strength of this tool, we regard the classical problem of how the (1+1) Evolutionary Algorithm optimizes an arbitrary linear pseudo-Boolean function. Here, we first give a relatively simple proof for the fact that any linear function is optimized in expected time O(n log n), where n is the length of the bit string. Afterwards, we show that in fact any such function is optimized in expected time at most (1+o(1)) 1.39 e n ln n, again using multiplicative drift analysis. We also prove a corresponding lower bound of (1−o(1)) e n ln n, which actually holds for all functions with a unique global optimum. We further demonstrate how our drift theorem immediately gives natural proofs (with better constants) for the best known runtime bounds for the (1+1) Evolutionary Algorithm on combinatorial problems like finding minimum spanning trees, shortest paths, or Euler tours in graphs.
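For readers unfamiliar with the algorithm under analysis, the following is a minimal Python sketch of the (1+1) Evolutionary Algorithm on a linear pseudo-Boolean function with positive weights (whose unique optimum is the all-ones string); the function and parameter names are illustrative, not taken from the paper.

```python
import random

def one_plus_one_ea(f, n, target, max_iters=200_000):
    """(1+1) EA: flip each bit independently with probability 1/n,
    keep the offspring if it is at least as good (elitist selection)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation
        fy = f(y)
        if fy >= fx:          # accept ties, as is standard for the (1+1) EA
            x, fx = y, fy
        if fx == target:      # unique optimum reached
            return t          # number of iterations used
    return None

# Example: the linear function f(x) = sum_i w_i x_i with positive weights.
w = [i + 1 for i in range(20)]
f = lambda x: sum(wi * xi for wi, xi in zip(w, x))
print(one_plus_one_ea(f, 20, target=sum(w)))
```

The observed hitting times for such runs concentrate around e n ln n, matching the bounds stated above.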

The complexity theory for black-box algorithms, introduced by Droste, Jansen, and Wegener (Theory Comput. Syst. 39:525–544, 2006), describes common limits on the efficiency of a broad class of randomised search heuristics. There is an obvious trade-off between the generality of the black-box model and the strength of the bounds that can be proven in such a model. In particular, the original black-box model provides only relatively small lower bounds for well-known benchmark problems, which seem unrealistic in certain cases and are typically not met by popular search heuristics. In this paper, we introduce a more restricted black-box model for optimisation of pseudo-Boolean functions which we claim captures the working principles of many randomised search heuristics, including simulated annealing, evolutionary algorithms, randomised local search, and others. The key concept worked out is an unbiased variation operator. Considering this class of algorithms, significantly better lower bounds on the black-box complexity are proved, amongst them an Ω(n log n) bound for functions with unique optimum. Moreover, a simple unimodal function and plateau functions are considered. We show that a simple (1+1) EA is able to match the runtime bounds in several cases.

The $$(1+(\lambda,\lambda))$$ genetic algorithm proposed in Doerr et al. (Theor Comput Sci 567:87–104, 2015) is one of the few examples for which a super-constant speed-up of the expected optimization time through the use of crossover could be rigorously demonstrated. It was proven that the expected optimization time of this algorithm on OneMax is $$O(\max\{n \log(n)/\lambda, \lambda n\})$$ for any offspring population size $$\lambda \in \{1,\ldots,n\}$$ (and the other parameters suitably dependent on $$\lambda$$), and it was shown experimentally that a self-adjusting choice of $$\lambda$$ leads to a better, most likely linear, runtime. In this work, we study more precisely how the optimization time depends on the parameter choices, leading to the following results on how to optimally choose the population size, the mutation probability, and the crossover bias, both in a static and a dynamic fashion. For the mutation probability and the crossover bias depending on $$\lambda$$ as in Doerr et al. (2015), we improve the previous runtime bound to $$O(\max\{n \log(n)/\lambda, n\lambda \log\log(\lambda)/\log(\lambda)\})$$. This expression is minimized by a value of $$\lambda$$ slightly larger than what the previous result suggested and gives an expected optimization time of $$O\left(n \sqrt{\log(n) \log\log\log(n)/\log\log(n)}\right)$$. We show that no static choice in the three-dimensional parameter space of offspring population size, mutation probability, and crossover bias gives an asymptotically better runtime. We also prove that the self-adjusting parameter choice suggested in Doerr et al. (2015) outperforms all static choices and yields the conjectured linear expected runtime. This is asymptotically optimal among all possible parameter choices.
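As a rough illustration of the algorithm being tuned (a sketch of its well-known two-phase structure on OneMax, not the paper's exact formulation), the following Python code uses the standard parameter coupling of mutation rate p = λ/n and crossover bias c = 1/λ; all names are illustrative.

```python
import random

def one_plus_ll_ga(f, n, lam, max_iters=100_000):
    """Sketch of the (1+(lambda,lambda)) GA with the standard coupling
    p = lam/n (mutation rate) and c = 1/lam (crossover bias).
    The stopping test assumes f is OneMax, i.e. the optimum value is n."""
    p, c = lam / n, 1.0 / lam
    x = [random.randint(0, 1) for _ in range(n)]
    fx = f(x)
    for t in range(1, max_iters + 1):
        # Mutation phase: sample ell ~ Bin(n, p) once, then create lam
        # offspring, each flipping exactly ell random bit positions.
        ell = sum(random.random() < p for _ in range(n))
        mutants = []
        for _ in range(lam):
            y = x[:]
            for i in random.sample(range(n), ell):
                y[i] ^= 1
            mutants.append(y)
        xp = max(mutants, key=f)          # best mutant
        # Crossover phase: lam offspring of a biased uniform crossover,
        # taking each bit from the best mutant with probability c.
        offspring = [[b if random.random() < c else a for a, b in zip(x, xp)]
                     for _ in range(lam)]
        y = max(offspring, key=f)
        if f(y) >= fx:                    # elitist selection
            x, fx = y, f(y)
        if fx == n:                       # OneMax optimum reached
            return t
    return None

print(one_plus_ll_ga(sum, 50, lam=4))     # sum() is OneMax on a 0/1 list
```

Re-sampling ℓ once per iteration and sharing it across all λ mutants is the design choice that lets the crossover phase repair the collateral damage of a large mutation rate.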

Possibly the most famous algorithmic meta-theorem is Courcelle’s theorem, which states that all MSO-expressible graph properties are decidable in linear time for graphs of bounded treewidth. Unfortunately, the running time’s dependence on the formula describing the problem is in general a tower of exponentials of unbounded height, and there exist lower bounds proving that this cannot be improved even if we restrict ourselves to deciding FO logic on trees. We investigate whether this parameter dependence can be improved by focusing on two proper subclasses of the class of bounded treewidth graphs: graphs of bounded vertex cover and graphs of bounded max-leaf number. We prove stronger algorithmic meta-theorems for these more restricted classes of graphs. More specifically, we show it is possible to decide any FO property in both of these classes with a singly exponential parameter dependence and that it is possible to decide MSO logic on graphs of bounded vertex cover with a doubly exponential parameter dependence. We also prove lower bound results which show that our upper bounds cannot be improved significantly, under widely believed complexity assumptions. Our work addresses an open problem posed by Michael Fellows.

An ortho-polygon visibility representation of an n-vertex embedded graph G (OPVR of G) is an embedding-preserving drawing of G that maps every vertex to a distinct orthogonal polygon and each edge to a vertical or horizontal visibility between its end-vertices. The vertex complexity of an OPVR of G is the minimum k such that every polygon has at most k reflex corners. We present polynomial time algorithms that test whether G has an OPVR and, if so, compute one of minimum vertex complexity. We argue that the existence and the vertex complexity of an OPVR of G are related to its number of crossings per edge and to its connectivity. More precisely, we prove that if G has at most one crossing per edge (i.e., G is a 1-plane graph), an OPVR of G always exists, while this may not be the case if two crossings per edge are allowed. Also, if G is a 3-connected 1-plane graph, we can compute an OPVR of G whose vertex complexity is bounded by a constant in O(n) time. However, if G is a 2-connected 1-plane graph, the vertex complexity of any OPVR of G may be $$\Omega(n)$$. In contrast, we describe a family of 2-connected 1-plane graphs for which an embedding that guarantees constant vertex complexity can be computed in O(n) time. Finally, we present the results of an experimental study on the vertex complexity of ortho-polygon visibility representations of 1-plane graphs.

The parameterized node multiway cut problem asks, for a given graph and a collection of terminal sets, to find a separator of size bounded by k whose removal separates the terminal sets in the graph. In this paper, we develop an $O(k4^{k}n^{3})$ time algorithm for this problem, significantly improving the previous $O(4^{k^{3}}n^{5})$ time algorithm for the problem. Our result gives the first polynomial time algorithm for the minimum node multiway cut problem when the separator size is bounded by O(log n).

Hyperbolicity is a distance-based measure of how close a given graph is to being a tree. Due to its relevance in modeling real-world networks, hyperbolicity has seen intensive research over the last years. Unfortunately, the best known algorithms used in practice for computing the hyperbolicity number of an n-vertex graph have running time $$O(n^4)$$. Exploiting the framework of parameterized complexity analysis, we explore possibilities for “linear-time FPT” algorithms to compute hyperbolicity. For example, we show that hyperbolicity can be computed in $$2^{O(k)} + O(n+m)$$ time (where m and k denote the number of edges and the size of a vertex cover in the input graph, respectively), while at the same time, unless the Strong Exponential Time Hypothesis (SETH) fails, there is no $$2^{o(k)}\cdot n^{2-\varepsilon}$$-time algorithm for every $$\varepsilon > 0$$.
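For intuition, the hyperbolicity number can be computed directly from Gromov's four-point condition; the following Python sketch is the naive brute force over all quadruples (the $$O(n^4)$$ baseline mentioned above, not the paper's FPT algorithm), assuming a connected unweighted graph.

```python
from collections import deque
from itertools import combinations

def bfs_dist(adj, s):
    """Single-source shortest-path distances by BFS (unweighted graph)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def hyperbolicity(adj):
    """Four-point condition: for each quadruple, take the three pairwise
    distance sums; the quadruple contributes (largest - second largest)/2."""
    nodes = list(adj)
    d = {s: bfs_dist(adj, s) for s in nodes}   # assumes connectivity
    best = 0
    for u, v, w, x in combinations(nodes, 4):
        sums = sorted([d[u][v] + d[w][x], d[u][w] + d[v][x], d[u][x] + d[v][w]])
        best = max(best, sums[2] - sums[1])
    return best / 2

# A 4-cycle has hyperbolicity 1 (the diagonal sum exceeds the others by 2),
# whereas any tree has hyperbolicity 0.
c4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(hyperbolicity(c4))  # → 1.0
```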

A graph is outer 1-planar (o1p) if it can be drawn in the plane such that all vertices are in the outer face and each edge is crossed at most once. o1p graphs generalize outerplanar graphs, which can be recognized in linear time, and specialize 1-planar graphs, whose recognition is NP-hard. We explore o1p graphs. Our first main result is a linear-time algorithm that takes a graph as input and returns a positive or a negative witness for o1p. If a graph $$G$$ is o1p, then the algorithm computes an embedding and can augment $$G$$ to a maximal o1p graph. Otherwise, $$G$$ includes one of six minors, which is detected by the recognition algorithm. Secondly, we establish structural properties of o1p graphs. o1p graphs are planar and are subgraphs of planar graphs with a Hamiltonian cycle. They are neither closed under edge contraction nor under subdivision. Several important graph parameters, such as treewidth, colorability, stack number, and queue number, increase by one from outerplanar to o1p graphs. Every o1p graph of size $$n$$ has at most $$\frac{5}{2}n - 4$$ edges, there are maximal o1p graphs with $$\frac{11}{5}n - \frac{18}{5}$$ edges, and these bounds are tight. Finally, every o1p graph has a straight-line grid drawing in $$O(n^2)$$ area with all vertices in the outer face, a planar visibility representation in $$O(n \log n)$$ area, and a 3D straight-line drawing in linear volume, and these drawings can be constructed in linear time.

Suppose a client stores $$n$$ elements in a hash table that is outsourced to an untrusted server. We address the problem of authenticating the hash table operations, where the goal is to design protocols capable of verifying the correctness of queries and updates performed by the server, thus ensuring the integrity of the remotely stored data across its entire update history. Solutions to this authentication problem allow the client to gain trust in the operations performed by a faulty or even malicious server that lies outside the administrative control of the client. We present two novel schemes that implement an authenticated hash table. An authenticated hash table exports the basic hash-table functionality for maintaining a dynamic set of elements, coupled with the ability to provide short cryptographic proofs that a given element is or is not a member of the current set. By employing efficient algorithmic constructs and cryptographic accumulators as the core security primitive, our schemes provide constant proof size, constant verification time and sublinear query or update time, strictly improving upon previous approaches. Specifically, in our first scheme, which is based on the RSA accumulator, the server is able to construct a (non-)membership proof in constant time and perform updates in $$O(n^{\epsilon}\log n)$$ time for any fixed constant $$0<\epsilon<1$$. A variation of this scheme achieves a different trade-off, offering constant update time and $$O(n^{\epsilon})$$ query time. Our second scheme uses an accumulator based on bilinear pairings to achieve $$O(n^{\epsilon})$$ update time at the server while keeping all other complexities constant. A variation of this scheme achieves $$O(n^{\epsilon}\log n)$$ time for queries and constant update time. An experimental evaluation of both solutions shows their practicality.
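To illustrate the core security primitive, here is a toy Python sketch of an RSA accumulator membership proof. The modulus and base are insecure demo values, and representing set elements by primes is an assumption of the sketch; this shows the idea only, not the paper's construction.

```python
# Toy RSA accumulator (INSECURE demo parameters).
N = 3233                 # 61 * 53; a real scheme uses a large RSA modulus
g = 2                    # public base
elements = [3, 5, 7]     # set elements, assumed already encoded as primes

prod = 1
for p in elements:
    prod *= p
acc = pow(g, prod, N)    # accumulator value for the whole set: g^(3*5*7) mod N

# Membership witness for 5: exponentiate by the product of all *other* primes.
wit = pow(g, prod // 5, N)

# Verification is a single modular exponentiation: wit^5 == acc (mod N),
# which is what gives constant proof size and verification time.
print(pow(wit, 5, N) == acc)  # → True
```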

Let $$B=(X,Y,E)$$ be a bipartite graph. A half-square of B has one color class of B as vertex set, say X; two vertices are adjacent whenever they have a common neighbor in Y. Every planar graph is a half-square of a planar bipartite graph, namely of its subdivision. Until recently, only half-squares of planar bipartite graphs, also known as map graphs (Chen et al., in: Proceedings of the thirtieth annual ACM symposium on the theory of computing, STOC ’98, pp 514–523, https://doi.org/10.1145/276698.276865, 1998; J ACM 49(2):127–138, https://doi.org/10.1145/506147.506148, 2002), have been investigated, and the most discussed problem is whether it is possible to recognize these graphs faster and more simply than with Thorup’s $$O(n^{120})$$-time algorithm (Thorup, in: Proceedings of the 39th IEEE symposium on foundations of computer science (FOCS), pp 396–405, https://doi.org/10.1109/SFCS.1998.743490, 1998). In this paper, we identify the first hardness case, namely that deciding if a graph is a half-square of a balanced bisplit graph is NP-complete. (Balanced bisplit graphs form a proper subclass of star convex bipartite graphs.) For classical subclasses of tree convex bipartite graphs such as biconvex, convex, and chordal bipartite graphs, we give good structural characterizations of their half-squares that imply efficient recognition algorithms. As a by-product, we obtain new characterizations of unit interval graphs, interval graphs, and strongly chordal graphs in terms of half-squares of biconvex bipartite, convex bipartite, and chordal bipartite graphs, respectively. Good characterizations of half-squares of star convex and star biconvex bipartite graphs are also given, yielding linear-time recognition algorithms for these half-squares.
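The half-square operation itself is straightforward to compute from the definition; a small Python sketch (all names illustrative):

```python
from itertools import combinations

def half_square(X, Y, E):
    """Half-square of the bipartite graph B=(X,Y,E) on color class X:
    two X-vertices are adjacent iff they share a neighbor in Y."""
    nbr = {x: {y for (a, y) in E if a == x} for x in X}
    return {(u, v) for u, v in combinations(sorted(X), 2) if nbr[u] & nbr[v]}

# B is the path a-1-b-2-c: a,b share neighbor 1 and b,c share neighbor 2,
# but a,c share no neighbor, so the half-square on {a,b,c} is the path a-b-c.
X, Y = ['a', 'b', 'c'], [1, 2]
E = [('a', 1), ('b', 1), ('b', 2), ('c', 2)]
print(sorted(half_square(X, Y, E)))  # → [('a', 'b'), ('b', 'c')]
```

The hard direction, as the abstract explains, is the inverse problem: recognizing whether a given graph arises as such a half-square of a bipartite graph from a prescribed class.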

The NP-complete problem Feedback Vertex Set is that of deciding whether or not it is possible, for a given integer k ≥ 0, to delete at most k vertices from a given graph so that what remains is a forest. The variant in which the deleted vertices must form an independent set is called Independent Feedback Vertex Set and is also NP-complete. In fact, even deciding if an independent feedback vertex set exists is NP-complete, and this problem is closely related to the 3-Colouring problem, or equivalently, to the problem of deciding whether or not a graph has an independent odd cycle transversal, that is, an independent set of vertices whose deletion makes the graph bipartite. We initiate a systematic study of the complexity of Independent Feedback Vertex Set for H-free graphs. We prove that it is NP-complete if H contains a claw or a cycle. Tamura, Ito and Zhou proved that it is polynomial-time solvable for P4-free graphs. We show that it remains polynomial-time solvable for P5-free graphs. We prove analogous results for the Independent Odd Cycle Transversal problem, which asks whether or not a graph has an independent odd cycle transversal of size at most k for a given integer k ≥ 0. Finally, in line with our underlying research aim, we compare the complexity of Independent Feedback Vertex Set for H-free graphs with the complexity of 3-Colouring, Independent Odd Cycle Transversal and other related problems.
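To make the problem definition concrete, here is a brute-force Python checker for the existence of an independent feedback vertex set of size at most k. It runs in exponential time and is for intuition only; it is not one of the paper's algorithms.

```python
from itertools import combinations

def is_forest(nodes, adj):
    """The subgraph induced by `nodes` is a forest iff its number of
    edges equals its number of vertices minus its number of components."""
    nodes = set(nodes)
    m = sum(1 for u in nodes for v in adj[u] if v in nodes) // 2
    seen, comps = set(), 0
    for s in nodes:
        if s in seen:
            continue
        comps += 1
        stack, _ = [s], seen.add(s)
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
    return m == len(nodes) - comps

def has_independent_fvs(adj, k):
    """Does some independent set S with |S| <= k hit every cycle?"""
    V = list(adj)
    for size in range(k + 1):
        for S in combinations(V, size):
            if any(v in adj[u] for u, v in combinations(S, 2)):
                continue  # S is not independent
            if is_forest(set(V) - set(S), adj):
                return True
    return False

# K4 has feedback vertex sets, but no *independent* one: every pair of
# vertices is adjacent, and deleting one vertex still leaves a triangle.
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
print(has_independent_fvs(K4, 4))  # → False
```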

A graph with n vertices is 1-planar if it can be drawn in the plane such that each edge is crossed at most once, and is optimal if it has the maximum possible number of $$4n-8$$ edges. We show that optimal 1-planar graphs can be recognized in linear time. Our algorithm implements a graph reduction system with two rules, which can be used to reduce every optimal 1-planar graph to an irreducible extended wheel graph. The graph reduction system is non-deterministic, constrained, and non-confluent.

Given an edge-weighted graph G with a set $$Q$$ of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the size of the minimum cut between any partition of the terminals. A natural question in the area of graph compression is to provide as small mimicking networks as possible for an input graph G being either an arbitrary graph or coming from a specific graph class. We show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require $$2^{k-2}$$ edges in any mimicking network. This nearly matches an upper bound of $$\mathcal{O}(k 2^{2k})$$ of Krauthgamer and Rika (in: Khanna (ed) Proceedings of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms, SODA 2013, New Orleans, 2013) and is in sharp contrast with the upper bounds of $$\mathcal{O}(k^2)$$ and $$\mathcal{O}(k^4)$$ under the assumption that all terminals lie on a single face (Goranci et al., in: Pruhs and Sohler (eds) 25th annual European symposium on algorithms (ESA 2017), 2017, arXiv:1702.01136; Krauthgamer and Rika, Refined vertex sparsifiers of planar graphs, 2017, arXiv:1702.05951). As a side result, we show a tight example for the double-exponential upper bounds given by Hagerup et al. (J Comput Syst Sci 57(3):366–375, 1998), Khan and Raghavendra (Inf Process Lett 114(7):365–371, 2014), and Chambers and Eppstein (J Gr Algorithms Appl 17(3):201–220, 2013).

We study the Maximum Cardinality Matching (MCM) and the Maximum Weight Matching (MWM) problems, on trees and on some special classes of graphs, in the online preemptive and the incremental graph models. In the Online Preemptive model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded, and all rejections are permanent. In this model, the complexity of the problems is settled for deterministic algorithms (McGregor, in: Proceedings of the 8th international workshop on approximation, randomization and combinatorial optimization problems, and proceedings of the 9th international conference on randomization and computation: algorithms and techniques, APPROX’05/RANDOM’05, Springer, Berlin, pp 170–181, 2005; Varadaraja, in: Automata, languages and programming: 38th international colloquium, ICALP 2011, Zurich, Switzerland, proceedings, part I, pp 379–390, https://doi.org/10.1007/978-3-642-22006-7_32, 2011). Epstein et al. (in: 30th international symposium on theoretical aspects of computer science, STACS 2013, Kiel, Germany, pp 389–399, https://doi.org/10.4230/LIPIcs.STACS.2013.389, 2013) gave a 5.356-competitive randomized algorithm for MWM, and also proved a lower bound of $$(1+\ln 2) \approx 1.693$$ on the competitive ratio for MCM. The same lower bound applies for MWM. In the Incremental Graph model, at each step an edge is added to the graph, and the algorithm is supposed to quickly update its current matching. Gupta (in: 34th international conference on foundation of software technology and theoretical computer science, FSTTCS 2014, 15–17 Dec 2014, New Delhi, India, pp 227–239, https://doi.org/10.4230/LIPIcs.FSTTCS.2014.227, 2014) proved that for any $$\epsilon \le 1/2$$, there exists an algorithm that maintains a $$(1+\epsilon)$$-approximate MCM for an incremental bipartite graph in an amortized update time of $$O\left(\frac{\log^2 n}{\epsilon^4}\right)$$. No $$(2-\epsilon)$$-approximation algorithm with a worst-case update time of O(1) is known in this model, even for special classes of graphs. In this paper we show that some of these results can be improved for trees and for some special classes of graphs. In the online preemptive model, we present a 64/33-competitive randomized algorithm (which uses only two bits of randomness) for MCM on trees. Inspired by this algorithm for MCM, we present the main result of the paper, a randomized algorithm for MCM with a worst-case update time of O(1) in the incremental graph model, which is 3/2-approximate (in expectation) on trees and 1.8-approximate (in expectation) on general graphs with maximum degree 3. Note that this algorithm works only against an oblivious adversary. We derandomize this algorithm and give a $$(3/2+\epsilon)$$-approximate deterministic algorithm for MCM on trees, with an amortized update time of $$O(1/\epsilon)$$. We also present a minor result for MWM in the online preemptive model, a 3-competitive randomized algorithm (that uses only O(1) bits of randomness) on growing trees (where the input revealed up to any stage is always a tree, i.e. a new edge never connects two disconnected trees).

Given a vertex-weighted connected graph $$G = (V, E)$$, the maximum weight internal spanning tree (MwIST for short) problem asks for a spanning tree T of G such that the total weight of the internal vertices in T is maximized. The unweighted variant, denoted MIST, is NP-hard and APX-hard, and the currently best approximation algorithm has a proven performance ratio of 13/17. The currently best approximation algorithm for MwIST only has a performance ratio of $$1/3 - \epsilon$$, for any $$\epsilon > 0$$. In this paper, we present a simple algorithm based on a novel relationship between MwIST and maximum weight matching, and show that it achieves a significantly better approximation ratio of 1/2. When restricted to claw-free graphs, a special case previously studied, we design a 7/12-approximation algorithm.

In a recent breakthrough paper, Braverman et al. (in: STOC’13, pp 151–160, 2013) developed a local characterization of the zero-error information complexity in the two-party model, and used it to compute the exact internal and external information complexity of the 2-bit AND function. In this article, we extend their results on the AND function to the multi-party number-in-hand model by proving that the generalization of their protocol has optimal internal and external information cost for certain natural distributions. Our proof has new components, and in particular, it fixes a minor gap in the proof of Braverman et al.

We consider a two-way trading problem, where investors buy and sell a stock whose price moves within a certain range. Naturally they want to maximize their profit. Investors can perform up to k trades, where each trade must involve the full amount. We give optimal algorithms for three different models, which differ in the knowledge of how the price fluctuates. In the first model, there are global minimum and maximum bounds m and M. We first show an optimal lower bound of $$\varphi$$ (where $$\varphi = M/m$$) on the competitive ratio for one trade, which is the bound achieved by trivial algorithms. Perhaps surprisingly, when we consider more than one trade, we can give a better algorithm that loses only a factor of $$\varphi^{2/3}$$ (rather than $$\varphi$$) per additional trade. Specifically, for k trades the algorithm has competitive ratio $$\varphi^{(2k+1)/3}$$. Furthermore, we show that this ratio is the best possible by giving a matching lower bound. In the second model, m and M are not known in advance, and just $$\varphi$$ is known. We show that this only costs us an extra factor of $$\varphi^{1/3}$$, i.e., both upper and lower bounds become $$\varphi^{(2k+2)/3}$$. Finally, we consider the bounded daily return model, where instead of a global limit, the fluctuation from one day to the next is bounded; again we give optimal algorithms, and interestingly one of them resembles common trading strategies that involve stop-loss limits.

Given a point s and a set of h pairwise disjoint polygonal obstacles with a total of n vertices in the plane, and assuming a triangulation of the space outside the obstacles is given, we present an O(n + h log h) time and O(n) space algorithm for building a data structure (called a shortest path map) of size O(n) such that for any query point t, the length of an L1 shortest obstacle-avoiding path from s to t can be computed in O(log n) time, and the actual path can be reported in additional time proportional to the number of edges of the path. The previously best algorithm computes such a shortest path map in O(n log n) time and O(n) space, so our algorithm is faster when h is relatively small. Further, our techniques can be extended to obtain improved results for other related problems, e.g., computing the L1 geodesic Voronoi diagram for a set of point sites among the obstacles.