A duality based 2-approximation algorithm for maximum agreement forest
We give a 2-approximation algorithm for the Maximum Agreement Forest problem on two rooted binary trees. This NP-hard problem has been studied extensively in the past two decades, since it can be used to compute the rooted Subtree Prune-and-Regraft (rSPR) distance between two phylogenetic trees. Our algorithm is combinatorial and its running time is quadratic in the input size. To prove the approximation guarantee, we construct a feasible dual solution for a novel exponential-size linear programming formulation. In addition, we show this linear program has a smaller integrality gap than previously known formulations, and we give an equivalent compact formulation, showing that it can be solved in polynomial time.
Semi-streaming algorithms for submodular matroid intersection
While the basic greedy algorithm gives a semi-streaming algorithm with an approximation guarantee of 2 for the unweighted matching problem, it was only recently that Paz and Schwartzman obtained an analogous result for weighted instances. Their approach is based on the versatile local ratio technique and also applies to generalizations such as weighted hypergraph matchings. However, the framework of their analysis fails for the related problem of weighted matroid intersection, and as a result the approximation guarantee for weighted instances did not match the factor 2 achieved by the greedy algorithm for unweighted instances. Our main result closes this gap by developing a semi-streaming algorithm with an approximation guarantee of 2 + ε for weighted matroid intersection, improving upon the previous best guarantee of 4 + ε. Our techniques also allow us to generalize recent results by Levin and Wajc on submodular maximization subject to matching constraints to matroid-intersection constraints. While our algorithm is an adaptation of the local ratio technique used in previous works, the analysis deviates significantly and relies on structural properties of matroid intersection, called kernels. Finally, we conjecture that our algorithm gives a (k + ε)-approximation for the intersection of k matroids, but we prove that new tools are needed in the analysis, as the structural properties we use fail for k ≥ 3.
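The local ratio technique referred to above is easiest to see in the weighted matching setting. Below is a minimal Python sketch of that idea (our own simplification, not the paper's matroid-intersection algorithm); the names are ours, and the (1 + ε) gating that bounds the stack size in the semi-streaming analysis is omitted.

from collections import defaultdict

# Hedged sketch of the local-ratio semi-streaming idea for weighted matching:
# keep an edge only if its weight beats the current vertex potentials, then
# unwind the kept edges greedily, newest first.

def local_ratio_matching(edge_stream):
    """edge_stream yields (u, v, weight) triples; returns a matching as a set of edges."""
    phi = defaultdict(float)   # vertex potentials
    stack = []                 # edges with positive residual gain, in stream order

    # Streaming pass.
    for u, v, w in edge_stream:
        gain = w - phi[u] - phi[v]
        if gain > 0:
            stack.append((u, v))
            phi[u] += gain
            phi[v] += gain

    # Post-processing pass: build a matching from the stack, newest edges first.
    matched, matching = set(), set()
    while stack:
        u, v = stack.pop()
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

# Tiny usage example on a path a-b-c with weights 1 and 3.
print(local_ratio_matching(iter([("a", "b", 1.0), ("b", "c", 3.0)])))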
Fast Augmented Lagrangian Method in the convex regime with convergence guarantees for the iterates
This work aims to minimize a continuously differentiable convex function with Lipschitz continuous gradient under linear equality constraints. The proposed inertial algorithm results from the discretization of the second-order primal-dual dynamical system with asymptotically vanishing damping term addressed by Boţ and Nguyen (J. Differential Equations 303:369-406, 2021), and it is formulated in terms of the Augmented Lagrangian associated with the minimization problem. The general setting we consider for the inertial parameters covers the three classical rules by Nesterov, Chambolle-Dossal and Attouch-Cabot used in the literature to formulate fast gradient methods. For these rules, we obtain in the convex regime convergence rates of order O(1/k²) for the primal-dual gap, the feasibility measure, and the objective function value. In addition, we prove that the generated sequence of primal-dual iterates converges to a primal-dual solution in a general setting that covers the latter two rules. This is the first result providing convergence of the sequence of iterates generated by a fast algorithm for linearly constrained convex optimization problems without additional assumptions such as strong convexity. We also emphasize that all convergence results of this paper are compatible with the ones obtained by Boţ and Nguyen (J. Differential Equations 303:369-406, 2021) in the continuous setting.
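For reference, and in our own notation rather than the paper's, the Augmented Lagrangian on which such methods are formulated has the standard form, for the problem of minimizing f(x) subject to Ax = b:

\[
  \mathcal{L}_{\beta}(x,\lambda)
  \;=\; f(x) \;+\; \langle \lambda,\, Ax - b \rangle \;+\; \frac{\beta}{2}\,\|Ax - b\|^{2},
  \qquad \beta > 0 .
\]

The primal and dual variables are then updated jointly, with inertial (extrapolation) terms governed by the parameter rules mentioned in the abstract.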
Subgradient ellipsoid method for nonsmooth convex problems
In this paper, we present a new ellipsoid-type algorithm for solving nonsmooth problems with convex structure. Examples of such problems include nonsmooth convex minimization problems, convex-concave saddle-point problems and variational inequalities with a monotone operator. Our algorithm can be seen as a combination of the standard Subgradient and Ellipsoid methods. However, in contrast to the latter, the proposed method has a reasonable convergence rate even when the dimensionality of the problem is sufficiently large. For generating accuracy certificates in our algorithm, we propose an efficient technique, which improves upon the previously known recipes (Nemirovski in Math Oper Res 35(1):52-78, 2010).
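As a point of reference for the first of the two ingredients being combined, here is a minimal sketch of the classical subgradient method for nonsmooth convex minimization. The step-size rule and the illustrative objective f(x) = ||Ax - b||_1 are our own textbook choices, not the paper's method.

import numpy as np

# Classical subgradient method on the nonsmooth convex objective ||Ax - b||_1,
# with a normalized 1/sqrt(k) step size and best-iterate tracking.

def subgradient_method(A, b, x0, iters=2000):
    x, best_x, best_val = x0.copy(), x0.copy(), np.inf
    for k in range(1, iters + 1):
        r = A @ x - b
        val = np.abs(r).sum()
        if val < best_val:
            best_val, best_x = val, x.copy()
        g = A.T @ np.sign(r)          # a subgradient of ||Ax - b||_1 at x
        x = x - g / (np.linalg.norm(g) * np.sqrt(k) + 1e-12)
    return best_x, best_val

rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
x_hat, f_hat = subgradient_method(A, b, np.zeros(10))
print(f_hat)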
A technique for obtaining true approximations for k-center with covering constraints
There has been a recent surge of interest in incorporating fairness aspects into classical clustering problems. Two recently introduced variants of the k-Center problem in this spirit are Colorful k-Center, introduced by Bandyapadhyay, Inamdar, Pai, and Varadarajan, and lottery models, such as the Fair Robust k-Center problem introduced by Harris, Pensyl, Srinivasan, and Trinh. To address fairness aspects, these models, compared to traditional k-Center, include additional covering constraints. Prior approximation results for these models require relaxing some of the normally hard constraints, like the number of centers to be opened or the involved covering constraints, and therefore only obtain constant-factor pseudo-approximations. In this paper, we introduce a new approach to deal with such covering constraints that leads to (true) approximations, including a 4-approximation for Colorful k-Center with constantly many colors, settling an open question raised by Bandyapadhyay, Inamdar, Pai, and Varadarajan, and a 4-approximation for Fair Robust k-Center, for which the existence of a (true) constant-factor approximation was also open. We complement our results by showing that if one allows an unbounded number of colors, then Colorful k-Center admits no approximation algorithm with finite approximation guarantee, assuming that P ≠ NP. Moreover, under the Exponential Time Hypothesis, the problem is inapproximable if the number of colors grows faster than logarithmically in the size of the ground set.
On the directional asymptotic approach in optimization theory
As a starting point of our research, we show that, for a fixed order, each local minimizer of a rather general nonsmooth optimization problem in Euclidean spaces is either M-stationary in the classical sense (corresponding to stationarity of order 1), satisfies stationarity conditions in terms of a coderivative construction of that order, or is asymptotically stationary with respect to a critical direction as well as that order in a certain sense. By ruling out the latter case with a constraint qualification not stronger than directional metric subregularity, we end up with new necessary optimality conditions comprising a mixture of limiting variational tools of order 1 and of the chosen higher order. These abstract findings are carved out for the broad class of geometric constraints and visualized by examples from complementarity-constrained and nonlinear semidefinite optimization. As a byproduct of a particular choice of the order, our general approach yields new so-called directional asymptotic regularity conditions which serve as constraint qualifications guaranteeing M-stationarity of local minimizers. We compare these new regularity conditions with standard constraint qualifications from nonsmooth optimization. Further, we extend directional concepts of pseudo- and quasi-normality to arbitrary set-valued mappings. It is shown that these properties provide sufficient conditions for the validity of directional asymptotic regularity. Finally, a novel coderivative-like variational tool is used to construct sufficient conditions for the presence of directional asymptotic regularity. For geometric constraints, it is illustrated that all appearing objects can be calculated in terms of initial problem data.
Gradient regularization of Newton method with Bregman distances
In this paper, we propose a first second-order scheme based on arbitrary non-Euclidean norms, incorporated via Bregman distances. These distances are introduced directly into the Newton iterate, with a regularization parameter proportional to the square root of the norm of the current gradient. For the basic scheme, as applied to the composite convex optimization problem, we establish a global convergence rate of the order O(k^{-2}), both in terms of the functional residual and in the norm of subgradients. Our main assumption on the smooth part of the objective is Lipschitz continuity of its Hessian. For uniformly convex functions of degree three, we justify a global linear rate, and for strongly convex functions we prove a local superlinear rate of convergence. Our approach can be seen as a relaxation of the Cubic Regularization of the Newton method (Nesterov and Polyak in Math Program 108(1):177-205, 2006) for convex minimization problems. This relaxation preserves the convergence properties and global complexities of the Cubic Newton method in the convex case, while the auxiliary subproblem at each iteration is simpler. We equip our method with an adaptive search procedure for choosing the regularization parameter. We also propose an accelerated scheme with convergence rate O(k^{-3}), where k is the iteration counter.
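In the Euclidean special case, the regularization rule described above reduces to a particularly simple damped Newton iteration. The sketch below is our own illustration of that case (the constant c and the toy objective are ours), not the paper's Bregman-distance scheme.

import numpy as np

# Euclidean sketch of a gradient-regularized Newton step: the damping parameter
# is proportional to the square root of the current gradient norm.

def grad_reg_newton(grad, hess, x0, c=1.0, iters=30):
    x = x0.copy()
    for _ in range(iters):
        g = grad(x)
        lam = c * np.sqrt(np.linalg.norm(g))          # regularization parameter
        H = hess(x) + lam * np.eye(x.size)            # damped Newton system
        x = x - np.linalg.solve(H, g)
    return x

# Toy strongly convex objective f(x) = 0.5 x'Ax + log(1 + exp(b'x)).
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, -2.0])
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
grad = lambda x: A @ x + sigmoid(b @ x) * b
hess = lambda x: A + sigmoid(b @ x) * (1 - sigmoid(b @ x)) * np.outer(b, b)
print(grad_reg_newton(grad, hess, np.zeros(2)))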
Noisy tensor completion via the sum-of-squares hierarchy
In the noisy tensor completion problem we observe m entries (whose locations are chosen uniformly at random) from an unknown n₁ × n₂ × n₃ tensor T. We assume that T is entry-wise close to being rank r. Our goal is to fill in its missing entries using as few observations as possible. Let n = max(n₁, n₂, n₃). We show that if m is roughly n^{3/2} r (up to polylogarithmic factors), then there is a polynomial time algorithm based on the sixth level of the sum-of-squares hierarchy for completing it. Our estimate agrees with almost all of T's entries almost exactly and works even when our observations are corrupted by noise. This is also the first algorithm for tensor completion that works in the overcomplete case when r > n, and in fact it works for rank almost as large as n^{3/2}. Our proofs are short and simple and are based on establishing a new connection between noisy tensor completion (through the language of Rademacher complexity) and the task of refuting random constraint satisfaction problems. This connection seems to have gone unnoticed even in the context of matrix completion. Furthermore, we use this connection to show matching lower bounds. Our main technical result is in characterizing the Rademacher complexity of the sequence of norms that arise in the sum-of-squares relaxations to the tensor nuclear norm. These results point to an interesting new direction: Can we explore computational vs. sample complexity tradeoffs through the sum-of-squares hierarchy?
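For orientation, the tensor nuclear norm whose sum-of-squares relaxations are analyzed here is usually defined as follows (standard definition in our notation, not quoted from the paper):

\[
  \|T\|_{*} \;=\; \inf\Big\{ \sum_{i} |\lambda_i| \;:\;
  T = \sum_{i} \lambda_i \, a_i \otimes b_i \otimes c_i,\;
  \|a_i\| = \|b_i\| = \|c_i\| = 1 \Big\}.
\]

Minimizing this norm over tensors consistent with the observed entries is intractable in general, which is what motivates relaxing it through the sum-of-squares hierarchy.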
On SOCP-based disjunctive cuts for solving a class of integer bilevel nonlinear programs
We study a class of integer bilevel programs with second-order cone constraints at the upper level and a convex-quadratic objective function and linear constraints at the lower level. We develop disjunctive cuts (DCs) to separate bilevel-infeasible solutions using a second-order-cone-based cut-generating procedure. We propose DC separation strategies and consider several approaches for removing redundant disjunctions and for normalization. Using these DCs, we propose a branch-and-cut algorithm for the problem class we study, and a cutting-plane method for the problem variant with only binary variables. We present an extensive computational study on a diverse set of instances, including instances with binary and with integer variables, and instances with a single and with multiple linking constraints. Our computational study demonstrates that the proposed enhancements of our solution approaches are effective at improving performance. Moreover, both of our approaches outperform a state-of-the-art generic solver for mixed-integer bilevel linear programs that is able to solve a linearized version of our binary instances.
Diverse collections in matroids and graphs
We investigate the parameterized complexity of finding diverse sets of solutions to three fundamental combinatorial problems. The input to the Weighted Diverse Bases problem consists of a matroid M, a weight function w on its ground set, and integers k and d. The task is to decide if there is a collection of k bases of M such that the weight of the symmetric difference of any pair of these bases is at least d. The input to the Weighted Diverse Common Independent Sets problem consists of two matroids M₁ and M₂ defined on the same ground set E, a weight function w on E, and integers k and d. The task is to decide if there is a collection of k common independent sets of M₁ and M₂ such that the weight of the symmetric difference of any pair of these sets is at least d. The input to the Diverse Perfect Matchings problem consists of a graph G and integers k and d. The task is to decide if G contains k perfect matchings such that the symmetric difference of any two of these matchings is at least d. We show that none of these problems can be solved in polynomial time unless P = NP. We derive fixed-parameter tractable (FPT) algorithms for all three problems with k + d as the parameter, and present a kernel of size polynomial in k and d for Weighted Diverse Bases.
Better-than-4/3-approximations for leaf-to-leaf tree and connectivity augmentation
The Connectivity Augmentation Problem (CAP) together with a well-known special case thereof known as the Tree Augmentation Problem (TAP) are among the most basic Network Design problems. There has recently been a surge of interest in finding approximation algorithms with guarantees below 2 for both TAP and CAP, culminating in the currently best approximation factor of 1.393 for both problems, obtained through quite sophisticated techniques. We present a new and arguably simple matching-based method for the well-known special case of leaf-to-leaf instances. Combining our work with prior techniques, we readily obtain a 4/3-approximation for Leaf-to-Leaf CAP by returning the better of our solution and that of an existing method. Prior to our work, a 4/3-guarantee was only known for Leaf-to-Leaf TAP instances on trees of height 2. Moreover, when combining our technique with a recently introduced stack analysis approach, which is part of the above-mentioned 1.393-approximation, we can further improve the approximation factor to 1.29, obtaining for the first time a factor below 4/3 for a nontrivial class of TAP/CAP instances.
Characterization of matrices with bounded Graver bases and depth parameters and applications to integer programming
An intensive line of research on fixed parameter tractability of integer programming is focused on exploiting the relation between the sparsity of a constraint matrix A and the norm of the elements of its Graver basis. In particular, integer programming is fixed parameter tractable when parameterized by the primal tree-depth and the entry complexity of A, and when parameterized by the dual tree-depth and the entry complexity of A; both these parameterizations imply that A is sparse; in particular, the number of its non-zero entries is linear in the number of columns or rows, respectively. We study preconditioners transforming a given matrix to a row-equivalent sparse matrix, if one exists, and provide structural results characterizing the existence of a sparse row-equivalent matrix in terms of the structural properties of the associated column matroid. In particular, our results imply that the ℓ1-norm of the Graver basis is bounded by a function of the maximum ℓ1-norm of a circuit of A. We use our results to design a parameterized algorithm that constructs a matrix row-equivalent to an input matrix that has small primal/dual tree-depth and entry complexity, if such a row-equivalent matrix exists. Our results yield parameterized algorithms for integer programming when parameterized by the ℓ1-norm of the Graver basis of the constraint matrix, when parameterized by the ℓ1-norm of the circuits of the constraint matrix, when parameterized by the smallest primal tree-depth and entry complexity of a matrix row-equivalent to the constraint matrix, and when parameterized by the smallest dual tree-depth and entry complexity of a matrix row-equivalent to the constraint matrix.
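For reference (standard definition, in our notation): the Graver basis of an integer matrix A is the set of conformally minimal nonzero integer vectors in its kernel,

\[
  \mathcal{G}(A) \;=\; \big\{\, g \in \ker(A) \cap \mathbb{Z}^{n} \setminus \{0\}
  \;:\; g \text{ is } \sqsubseteq\text{-minimal} \,\big\},
\]

where u ⊑ v means that u_i v_i ≥ 0 and |u_i| ≤ |v_i| for every coordinate i. Bounding the norm of these elements is what drives the fixed-parameter algorithms mentioned above.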
High-order methods beyond the classical complexity bounds: inexact high-order proximal-point methods
We introduce a bi-level optimization (BiOPT) framework for minimizing the sum of two convex functions, where one of them is smooth enough. The BiOPT framework offers three levels of freedom: (i) choosing the order of the proximal term; (ii) designing an inexact pth-order proximal-point method at the upper level; (iii) solving the auxiliary problem with a lower-order non-Euclidean method at the lower level. We here regularize the objective by a pth-order proximal term (for an arbitrary integer p ≥ 1) and then develop the generic inexact high-order proximal-point scheme and its acceleration using the standard estimating sequence technique at the upper level. This is followed at the lower level by solving the corresponding pth-order proximal auxiliary problem inexactly, either by one iteration of a pth-order tensor method or by a lower-order non-Euclidean composite gradient scheme. Ultimately, it is shown that applying the accelerated inexact pth-order proximal-point method at the upper level and handling the auxiliary problem by the non-Euclidean composite gradient scheme lead to a second-order method whose convergence rate, in terms of the iteration counter, goes beyond the classical bound for second-order schemes, which can result in a superfast method for some specific classes of problems.
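The pth-order proximal-point step around which the framework is built can be written as follows (our notation, following the high-order proximal operator used in this line of work):

\[
  x_{k+1} \;\approx\; \operatorname*{argmin}_{x}
  \Big\{ f(x) + \frac{H}{p+1}\,\|x - x_k\|^{p+1} \Big\},
  \qquad H > 0 ,
\]

with the minimization carried out only inexactly by the lower-level method, as described in the abstract.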
Complexity of linear relaxations in integer programming
For a set X of integer points in a polyhedron, the smallest number of facets of any polyhedron whose set of integer points coincides with X is called the relaxation complexity rc(X). This parameter, introduced by Kaibel and Weltge (2015), captures the complexity of linear descriptions of X without using auxiliary variables. Using tools from combinatorics, geometry of numbers, and quantifier elimination, we make progress on several open questions regarding rc(X) and its variant rc_Q(X), which restricts the descriptions of X to rational polyhedra. As our main results we show that rc(X) = rc_Q(X) when: (a) X is at most four-dimensional, (b) X represents every residue class in (Z/2Z)^d, (c) the convex hull of X contains an interior integer point, or (d) the lattice-width of X is above a certain threshold. Additionally, rc(X) can be algorithmically computed when X is at most three-dimensional, or when X satisfies one of the conditions (b), (c), or (d) above. Moreover, we obtain an improved lower bound on rc(X) in terms of the dimension of X.
Truthful ownership transfer with expert advice
When a company undergoes a merger or transfers its ownership, the existing governing body has an opinion on which buyer should take over as the new owner. Similar situations occur when assigning the host of big sports tournaments, like the World Cup or the Olympics. In all these settings, the values of the external bidders are as important as the opinions of the internal experts. Motivated by such scenarios, we consider a social welfare maximizing approach to design and analyze truthful mechanisms in settings where payments can be imposed on the bidders but not on the experts. Since this problem is a combination of mechanism design with and without monetary transfers, classical solutions like VCG cannot be applied, making this a novel mechanism design problem. We consider the simple but fundamental scenario with one expert and two bidders, and provide tight approximation guarantees with respect to the optimal social welfare. We distinguish between mechanisms that use ordinal and cardinal information, as well as between mechanisms that base their decisions on one of the two sides (either the bidders or the expert) or on both. Our analysis shows that the cardinal setting is quite rich and admits several non-trivial randomized truthful mechanisms, and it also allows for closer-to-optimal welfare guarantees.
Matroid bases with cardinality constraints on the intersection
Given two matroids M₁ and M₂ on a common ground set E with base sets B₁ and B₂, some integer k, and two cost functions c₁, c₂ : E → R, we consider the optimization problem of finding a basis X ∈ B₁ and a basis Y ∈ B₂ minimizing the cost c₁(X) + c₂(Y) subject to either a lower bound constraint |X ∩ Y| ≥ k, an upper bound constraint |X ∩ Y| ≤ k, or an equality constraint |X ∩ Y| = k on the size of the intersection of the two bases X and Y. The problem with the lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under the interval uncertainty representation, for which the existence of a strongly polynomial-time algorithm was left as an open question in Hradovich et al. (J Comb Optim 34(2):554-573, 2017). We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and thus be solved with a strongly polynomial-time primal-dual algorithm. We also present a strongly polynomial, primal-dual algorithm that computes a minimum cost solution for every feasible size of the intersection in one run, with asymptotic running time equal to that of one run of Frank's matroid intersection algorithm. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids. We obtain a strongly polynomial time algorithm for the recoverable robust polymatroid base problem with interval uncertainties.
Phragmén's voting methods and justified representation
In the late 19th century, the Swedish mathematician Edvard Phragmén proposed a load-balancing approach for selecting committees based on approval ballots. We consider three committee voting rules resulting from this approach: two optimization variants, one minimizing the maximum load and one minimizing the variance of loads, and a sequential variant. We study Phragmén's methods from an axiomatic point of view, focusing on properties capturing proportional representation. We show that the sequential variant satisfies proportional justified representation, which is a rare property for committee monotonic methods. Moreover, we show that the optimization variants satisfy perfect representation. We also analyze the computational complexity of Phragmén's methods and provide mixed-integer programming based algorithms for computing them.
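The sequential variant admits a compact description: at each step it adds the candidate whose unit of cost can be spread over its approvers with the smallest resulting maximum load. Below is a minimal Python sketch of that rule, our own implementation of the standard description rather than code from the paper.

# Minimal sketch of sequential Phragmen: each elected candidate incurs one unit
# of "load" spread over the voters approving it; at every step the rule picks
# the candidate minimizing the resulting balanced maximum voter load.

def sequential_phragmen(approvals, k):
    """approvals: dict voter -> set of approved candidates; k: committee size."""
    candidates = set().union(*approvals.values())
    load = {v: 0.0 for v in approvals}
    committee = []
    for _ in range(k):
        best_c, best_s = None, float("inf")
        for c in candidates - set(committee):
            supporters = [v for v, a in approvals.items() if c in a]
            if not supporters:
                continue
            # Balanced load if c is added: its supporters share one new unit
            # plus their current loads equally.
            s = (1.0 + sum(load[v] for v in supporters)) / len(supporters)
            if s < best_s:
                best_c, best_s = c, s
        if best_c is None:
            break
        committee.append(best_c)
        for v in approvals:
            if best_c in approvals[v]:
                load[v] = best_s
    return committee

# Tiny example: four voters, committee of size 2.
profile = {1: {"a", "b"}, 2: {"a", "b"}, 3: {"a", "c"}, 4: {"c"}}
print(sequential_phragmen(profile, 2))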
Local convergence of tensor methods
In this paper, we study local convergence of high-order Tensor Methods for solving convex optimization problems with composite objective. We justify local superlinear convergence under the assumption of uniform convexity of the smooth component, which has a Lipschitz-continuous high-order derivative. Convergence both in function value and in the norm of the minimal subgradient is established. Global complexity bounds for the Composite Tensor Method in the convex and uniformly convex cases are also discussed. Lastly, we show how local convergence of the methods can be globalized using inexact proximal iterations.
Facial reduction for symmetry reduced semidefinite and doubly nonnegative programs
We consider both facial reduction, FR, and symmetry reduction, SR, techniques for semidefinite programming, SDP. We show that the two together fit surprisingly well in an alternating direction method of multipliers, ADMM, approach. In fact, this approach allows for simply adding on nonnegativity constraints, and solving the doubly nonnegative, DNN, relaxation of many classes of hard combinatorial problems. We also show that the singularity degree remains the same after SR, and that the DNN relaxations considered here have singularity degree one, which is reduced to zero after FR. The combination of FR and SR leads to a significant improvement in both numerical stability and running time for both the ADMM and interior point approaches. We test our method on various DNN relaxations of hard combinatorial problems, including quadratic assignment problems with sizes of more than n = 500. This translates to a semidefinite constraint of order 250,000 and billions of nonnegatively constrained variables before applying the reduction techniques.
Fair colorful k-center clustering
An instance of colorful k-center consists of points in a metric space that are colored red or blue, along with an integer k and a coverage requirement for each color. The goal is to find the smallest radius ρ such that there exist balls of radius ρ around k of the points that meet the coverage requirements. The motivation behind this problem is twofold. First, from fairness considerations: each color/group should receive a similar service guarantee. Second, from the algorithmic challenges it poses: this problem combines the difficulties of clustering with those of the subset-sum problem. In particular, we show that this combination results in strong integrality gap lower bounds for several natural linear programming relaxations. Our main result is an efficient approximation algorithm that overcomes these difficulties to achieve an approximation guarantee of 3, nearly matching the tight approximation guarantee of 2 for the classical k-center problem, which this problem generalizes. Previous algorithms either opened more than k centers or only worked in the special case when the input points are in the plane.
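As a baseline for comparison, the classical (uncolored) k-center problem that this work generalizes admits a simple farthest-point greedy 2-approximation. A sketch of that standard algorithm, not of the paper's colorful k-center algorithm, is given below.

import numpy as np

# Farthest-point (Gonzalez) greedy: a standard 2-approximation for classical
# k-center, shown only as the baseline referenced in the abstract; it does not
# handle colors or coverage requirements.

def greedy_k_center(points, k, seed=0):
    rng = np.random.default_rng(seed)
    centers = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[centers[0]], axis=1)
    while len(centers) < k:
        nxt = int(np.argmax(dist))            # farthest point from chosen centers
        centers.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return centers, float(dist.max())         # chosen centers and achieved radius

pts = np.random.default_rng(1).random((200, 2))
centers, radius = greedy_k_center(pts, k=5)
print(centers, radius)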
Automated tight Lyapunov analysis for first-order methods
We present a methodology for establishing the existence of quadratic Lyapunov inequalities for a wide range of first-order methods used to solve convex optimization problems. In particular, we consider (i) classes of optimization problems of finite-sum form with (possibly strongly) convex and possibly smooth functional components, (ii) first-order methods that can be written as a linear system in state-space form in feedback interconnection with the subdifferentials of the functional components of the objective function, and (iii) quadratic Lyapunov inequalities that can be used to draw convergence conclusions. We present a necessary and sufficient condition for the existence of a quadratic Lyapunov inequality within a predefined class of Lyapunov inequalities, which amounts to solving a small-sized semidefinite program. We showcase our methodology on several first-order methods that fit the framework. Most notably, our methodology allows us to significantly extend the region of parameter choices that allow for duality gap convergence in the Chambolle-Pock method when the linear operator is the identity mapping.
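For concreteness, quadratic Lyapunov inequalities of the kind sought here have the generic shape below (our schematic notation, not the paper's):

\[
  V(\xi_{k+1}) \;\le\; \rho^{2}\, V(\xi_k) \;-\; \gamma_k,
  \qquad
  V(\xi) \;=\; \xi^{\top} P\, \xi,\quad P \succeq 0,
\]

where ξ_k collects the algorithm's state, ρ ∈ (0, 1] encodes a rate, and γ_k ≥ 0 is a residual term (for example a function-value or duality-gap quantity) whose summability yields the convergence conclusions; the semidefinite program searches over P and the multipliers certifying the inequality.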