JOURNAL OF GLOBAL OPTIMIZATION

Exchange rates and multicommodity international trade: insights from spatial price equilibrium modeling with policy instruments via variational inequalities
Nagurney A, Hassani D, Nivievskyi O and Martyshev P
In this paper, we construct a multicommodity international trade spatial price equilibrium model of special relevance to agriculture, in which exchange rates are included along with policy instruments in the form of tariffs, subsidies, and quotas. The model allows for multiple trade routes between country origin nodes and country destination nodes, and these trade routes can include different modes of transportation and transport through distinct countries. We capture the impacts of exchange rates through the definition of effective path costs and identify the governing multicommodity international trade spatial price equilibrium conditions, which are then formulated as a variational inequality problem in product path flows. Existence results are established and a computational procedure is presented. The illustrative numerical examples and a case study are inspired by the impacts of the war against Ukraine on agricultural trade flows and product prices. The modeling and algorithmic framework allows for the quantification of the impacts of exchange rates and various trade policies, as well as of the addition or deletion of supply markets, demand markets, and/or routes, on supply and demand market prices in local currencies and on the volume of product trade flows, with implications for food security.
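The abstract above does not spell out the computational procedure, so the following is only a generic sketch of a projection-type (Euler) method for a variational inequality over the nonnegative orthant; the affine operator `F(x) = A x + b` and all data are hypothetical stand-ins for the paper's effective path cost and price operators, not their actual form.

```python
# Hypothetical sketch: projection (Euler-type) method for a variational
# inequality VI(F, K) with K the nonnegative orthant (e.g., product path
# flows).  The affine operator F(x) = A x + b is a stand-in for the
# paper's effective path cost / price operators, not their actual form.
import numpy as np

def euler_projection(F, x0, steps=5000, a0=0.1):
    """Iterate x_{k+1} = max(0, x_k - a_k F(x_k)) with a_k = a0 / (1 + k)."""
    x = np.maximum(x0, 0.0)
    for k in range(steps):
        x = np.maximum(0.0, x - (a0 / (1.0 + k)) * F(x))
    return x

# Toy two-path example with a strongly monotone affine operator.
A = np.array([[2.0, 0.5], [0.5, 1.5]])
b = np.array([-4.0, -3.0])
x_star = euler_projection(lambda x: A @ x + b, np.zeros(2))
print("approximate equilibrium path flows:", x_star)
```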
A three-stage stochastic optimization model integrating 5G technology and UAVs for disaster management
Colajanni G, Daniele P, Nagurney A, Nagurney LS and Sciacca D
In this paper, we develop a three-stage stochastic network-based optimization model for the provision of 5G services with Unmanned Aerial Vehicles (UAVs) in the disaster management phases of preparedness, response, and recovery/reconstruction. Users or devices on the ground request services of a fleet of controller UAVs in flight, and the requested services are executed by a fleet of UAVs organized as a Flying Ad-Hoc Network and interconnected via 5G technology. A disaster scenario can create difficulties for the provision of services by service providers. For this reason, in the first stage, service providers make predictions about possible scenarios in the second stage. Therefore, the first stage represents the preparedness phase and the second stage represents the response phase, followed by the recovery/reconstruction phase, represented by the third stage. In each of the three stages, service providers seek to maximize the amount of services to be performed, assigning each service a priority. They also aim, simultaneously, to minimize the total management costs of requests, the transmission and execution costs of services, and the costs to increase the resources of the pre-existing network and, if need be, to reduce them in the recovery/reconstruction phase. For the proposed multi-stage stochastic optimization model, we provide variational formulations for which we investigate the existence and uniqueness of the solution. Finally, a detailed numerical example is solved in order to underline some of the key aspects of the model. This paper adds to the literature on the rigorous mathematical modeling of advanced technologies for disaster management.
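As a reading aid only, the block below shows the generic nested structure of a three-stage stochastic program; the symbols (stage objectives f_i, feasible sets X_i, scenario variables ξ_2, ξ_3) are placeholders and do not reproduce the paper's actual formulation.

```latex
% Generic three-stage stochastic program (not the paper's exact model):
% first-stage (preparedness) decisions x_1, then recourse decisions x_2, x_3
% taken after the scenario information \xi_2, \xi_3 is revealed.
\[
\max_{x_1 \in X_1} \; f_1(x_1)
 + \mathbb{E}_{\xi_2}\Big[\max_{x_2 \in X_2(x_1,\xi_2)} f_2(x_2;\xi_2)
 + \mathbb{E}_{\xi_3 \mid \xi_2}\big[\max_{x_3 \in X_3(x_2,\xi_3)} f_3(x_3;\xi_3)\big]\Big]
\]
```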
NMR Assignment through Linear Programming
Bravo-Ferreira JFS, Cowburn D, Khoo Y and Singer A
Nuclear Magnetic Resonance (NMR) Spectroscopy is the second most used technique (after X-ray crystallography) for structural determination of proteins. A computational challenge in this technique involves solving a discrete optimization problem that assigns the resonance frequency to each atom in the protein. This paper introduces LIAN (LInear programming Assignment for NMR), a novel linear programming formulation of the problem which yields state-of-the-art results in simulated and experimental datasets.
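LIAN's actual formulation is not reproduced here; the sketch below only illustrates the underlying principle of LP-based assignment: because the assignment polytope's constraint matrix is totally unimodular, the LP relaxation already has an integral optimum. The cost matrix is synthetic.

```python
# Minimal sketch (not LIAN's formulation): LP relaxation of a generic
# assignment problem.  Total unimodularity of the constraint matrix makes
# the LP optimum integral, so linear programming recovers a hard 0/1
# assignment -- the basic idea behind LP-based assignment.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 4
cost = rng.random((n, n))             # hypothetical atom-vs-frequency match costs

c = cost.ravel()                      # variables x_ij, row-major order
A_eq = np.zeros((2 * n, n * n))
for i in range(n):                    # each row (atom) is assigned exactly once
    A_eq[i, i * n:(i + 1) * n] = 1.0
for j in range(n):                    # each column (frequency) is used exactly once
    A_eq[n + j, j::n] = 1.0
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
print(res.x.reshape(n, n).round(6))   # a permutation matrix
```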
Distributionally robust mean-absolute deviation portfolio optimization using Wasserstein metric
Chen D, Wu Y, Li J, Ding X and Chen C
Data uncertainty has a great impact on portfolio selection. Based on the popular mean-absolute deviation (MAD) model, we investigate how to make robust portfolio decisions. In this paper, a novel Wasserstein metric-based data-driven distributionally robust mean-absolute deviation (DR-MAD) model is proposed. However, the proposed model is non-convex with an infinite-dimensional inner problem. To solve this model, we prove that it can be transformed into two simple finite-dimensional linear programs. Consequently, the problem can be solved as easily as the classic MAD model. Furthermore, the proposed DR-MAD model is compared with the 1/N, classic MAD, and mean-variance models on S&P 500 constituent stocks in six different settings. The experimental results show that the portfolios constructed by the DR-MAD model are superior to the benchmarks in terms of profitability and stability in most fluctuating markets. This result suggests that the Wasserstein distributionally robust optimization framework is an effective approach for addressing data uncertainty in portfolio optimization.
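The DR-MAD reformulation itself is not reproduced here; the sketch below sets up only the classic MAD model, which the paper builds on, as a linear program over synthetic returns. The return data and the target-return level are hypothetical.

```python
# Sketch of the classic MAD portfolio LP (the paper's DR-MAD reformulation
# is not reproduced).  Synthetic scenario returns; the target return is set
# to the mean of the asset means so the problem is always feasible.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
T, n = 60, 5
R = 0.01 * rng.standard_normal((T, n)) + 0.002    # synthetic scenario returns
mu = R.mean(axis=0)
D = R - mu                                         # deviations from the mean
rho = float(mu.mean())                             # hypothetical target return

# Decision vector z = (w_1..w_n, u_1..u_T); minimize (1/T) * sum_t u_t.
c = np.concatenate([np.zeros(n), np.ones(T) / T])

# u_t >= +(D_t w) and u_t >= -(D_t w)   <=>   +/- D w - u <= 0
A_ub = np.block([[ D, -np.eye(T)],
                 [-D, -np.eye(T)],
                 [np.concatenate([-mu, np.zeros(T)])[None, :]]])   # -mu'w <= -rho
b_ub = np.concatenate([np.zeros(2 * T), [-rho]])

A_eq = np.concatenate([np.ones(n), np.zeros(T)])[None, :]          # sum_i w_i = 1
b_eq = [1.0]

bounds = [(0, None)] * (n + T)                                     # long-only weights
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("weights:", res.x[:n].round(4), "MAD:", res.fun)
```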
DOMINO: Data-driven Optimization of bi-level Mixed-Integer NOnlinear Problems
Beykal B, Avraamidou S, Pistikopoulos IPE, Onel M and Pistikopoulos EN
The Data-driven Optimization of bi-level Mixed-Integer NOnlinear problems (DOMINO) framework is presented for addressing the optimization of bi-level mixed-integer nonlinear programming problems. In this framework, bi-level optimization problems are approximated as single-level optimization problems by collecting samples of the upper-level objective and solving the lower-level problem to global optimality at those sampling points. This is done through the integration of the DOMINO framework with a grey-box optimization solver to perform design of experiments on the upper-level objective, and to consecutively approximate and optimize bi-level mixed-integer nonlinear programming problems that are challenging to solve using exact methods. The performance of DOMINO is assessed by solving numerous bi-level benchmark problems and a land allocation problem in the Food-Energy-Water Nexus, and by employing different data-driven optimization methodologies, including both local and global methods. Although this data-driven approach cannot provide a theoretical guarantee of global optimality, we present an algorithmic advancement that can guarantee feasibility for large-scale bi-level optimization problems when the lower-level problem is solved to global optimality at convergence.
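The toy sketch below is not the DOMINO implementation; it only illustrates the general idea of sampling upper-level decisions, solving the lower level to (near-)global optimality at each sample, and optimizing the induced single-level objective. Both objective functions and the sampling scheme are made up for illustration.

```python
# Toy sketch of data-driven bi-level approximation (not the DOMINO
# implementation): sample upper-level decisions x, solve the lower-level
# problem to (near-)global optimality at each sample, and pick the best
# sample of the induced single-level objective.
import numpy as np

def lower_level_argmin(x):
    """Global minimizer of a hypothetical lower level
       min_y (y - x)**2 + sin(3 y) over y in [-2, 2], via a fine grid."""
    ys = np.linspace(-2.0, 2.0, 20001)
    return ys[np.argmin((ys - x) ** 2 + np.sin(3 * ys))]

def upper_objective(x, y):
    """Hypothetical upper-level objective F(x, y)."""
    return (x - 1.0) ** 2 + (y + 0.5) ** 2

# "Design of experiments" on the upper-level variable (here just a 1-D grid).
samples = np.linspace(-2.0, 2.0, 41)
values = [upper_objective(x, lower_level_argmin(x)) for x in samples]
best = samples[int(np.argmin(values))]
print("best sampled upper-level decision:", best, "value:", min(values))
```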
A new mathematical approach for handling DVH criteria in IMRT planning
Scherrer A, Yaneva F, Grebe T and Küfer KH
The appropriate handling of planning criteria on the cumulative dose-volume histogram (DVH) is a highly problematic issue in intensity-modulated radiation therapy (IMRT) plan optimization. The nonconvexity of DVH criteria and the globality of the resulting optimization problems complicate the design of suitable optimization methods that feature numerical efficiency, reliable convergence, and optimality of the results. This work examines the mathematical structure of DVH criteria and proves the valuable properties of isotonicity/antitonicity, connectedness, invexity, and sufficiency of the KKT condition. These properties facilitate the use of efficient and goal-oriented optimization methods. An exemplary algorithmic realization with feasible direction methods gives rise to a functional framework for interactive IMRT planning on DVH criteria. Numerical examples on real-world planning cases demonstrate its practical capability.
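For readers unfamiliar with DVH criteria, the minimal sketch below shows what such a criterion constrains, namely a point on the cumulative dose-volume histogram; it does not touch the paper's optimization machinery, and the dose values and the "V20 ≤ 30%" limit are hypothetical.

```python
# Minimal sketch of what a DVH criterion constrains (not the paper's
# optimization machinery).  The synthetic doses and the 20 Gy / 30% limits
# are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
voxel_doses = rng.gamma(shape=4.0, scale=4.0, size=10_000)   # synthetic doses in Gy

def cumulative_dvh(doses, d):
    """Fraction of the structure's voxels receiving at least dose d."""
    return float(np.mean(doses >= d))

# Example criterion "V20 <= 30%": at most 30% of the organ receives >= 20 Gy.
v20 = cumulative_dvh(voxel_doses, 20.0)
print(f"V20 = {100 * v20:.1f}%, criterion satisfied: {v20 <= 0.30}")
```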
Chebyshev model arithmetic for factorable functions
Rajyaguru J, Villanueva ME, Houska B and Chachuat B
This article presents an arithmetic for the computation of Chebyshev models for factorable functions and an analysis of their convergence properties. Similar to Taylor models, Chebyshev models consist of a pair of a multivariate polynomial approximating the factorable function and an interval remainder term bounding the actual gap with this polynomial approximant. Propagation rules and local convergence bounds are established for the addition, multiplication and composition operations with Chebyshev models. The global convergence of this arithmetic as the polynomial expansion order increases is also discussed. A generic implementation of Chebyshev model arithmetic is available in the library MC++. It is shown through several numerical case studies that Chebyshev models provide tighter bounds than their Taylor model counterparts, but this comes at the price of extra computational burden.
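The sketch below illustrates only the polynomial-plus-remainder idea behind Chebyshev models using NumPy's Chebyshev interpolation; unlike the rigorous interval remainders propagated by MC++, the remainder here is merely estimated by dense sampling, so it is not a valid bound. The test function and degree are arbitrary.

```python
# Sketch of the polynomial-plus-remainder idea behind Chebyshev models.
# The remainder is *estimated* by dense sampling, not rigorously bounded
# as in the interval arithmetic of MC++.
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

f = lambda x: np.exp(-x) * np.sin(5.0 * x)        # a factorable test function
a, b, deg = 0.0, 2.0, 8

p = Chebyshev.interpolate(f, deg, domain=[a, b])  # Chebyshev interpolant
xs = np.linspace(a, b, 100_001)
remainder = np.max(np.abs(f(xs) - p(xs)))         # sampled (non-rigorous) gap

print(f"degree-{deg} Chebyshev model: |f - p| <= ~{remainder:.2e} on [{a}, {b}]")
```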
Refugee migration networks and regulations: a multiclass, multipath variational inequality framework
Nagurney A, Daniele P and Nagurney LS
In this paper, we take up the timely topic of the modeling, analysis, and solution of refugee migration networks. We construct a general, multiclass, multipath model, determine the governing equilibrium conditions, and provide alternative variational inequality formulations in path flows and in link flows. We also demonstrate how government-imposed regulations associated with refugees can be captured via constraints. We provide qualitative properties and then establish, via a supernetwork transformation, that the model(s) are isomorphic to traffic network equilibrium models with fixed demands. Illustrative examples are given, along with numerical examples inspired by a refugee crisis from Mexico to the United States, which are solved using the Euler method embedded with exact equilibration. The work sets the foundation for the development of additional models and algorithms and also provides insights as to who wins and who loses under certain refugee regulations.
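Since the abstract appeals to the isomorphism with traffic network equilibrium, the block below states the standard multiclass path-flow equilibrium conditions with fixed demands and their variational inequality form; the notation is generic and not necessarily the paper's.

```latex
% Standard multiclass path-flow equilibrium conditions with fixed demands
% (generic notation; not necessarily the paper's symbols).  For each class k,
% origin/destination pair w, and path p in P_w:
\[
x_p^{k*} > 0 \;\Rightarrow\; C_p^k(x^*) = \lambda_w^k,
\qquad
x_p^{k*} = 0 \;\Rightarrow\; C_p^k(x^*) \ge \lambda_w^k .
\]
% Equivalently, as a variational inequality: find x^* \in K with
\[
\sum_{k}\sum_{w}\sum_{p \in P_w} C_p^k(x^*)\,\bigl(x_p^k - x_p^{k*}\bigr) \;\ge\; 0
\qquad \forall\, x \in K,
\]
% where K encodes nonnegativity and the fixed class demands
% \sum_{p \in P_w} x_p^k = d_w^k.
```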
Piecewise parametric structure in the pooling problem: from sparse strongly-polynomial solutions to NP-hardness
Baltean-Lugojan R and Misener R
The standard pooling problem is an NP-hard subclass of non-convex quadratically-constrained optimization problems that commonly arises in process systems engineering applications. We take a parametric approach to uncovering topological structure and sparsity, focusing on the single quality standard pooling problem in its p-formulation. The structure uncovered in this approach validates Professor Christodoulos A. Floudas' intuition that pooling problems are rooted in piecewise-defined functions. We introduce dominant active topologies under relaxed flow availability to explicitly identify pooling problem sparsity and show that the sparse patterns of active topological structure are associated with a piecewise objective function. Finally, the paper explains the conditions under which sparsity vanishes and where the combinatorial complexity emerges to cross over the P/NP boundary. We formally present the results obtained and their derivations for various specialized single quality pooling problem subclasses.
Enclosure of all index-1 saddle points of general nonlinear functions
Nerantzis D and Adjiman CS
Transition states (index-1 saddle points) play a crucial role in determining the rates of chemical transformations but their reliable identification remains challenging in many applications. Deterministic global optimization methods have previously been employed for the location of transition states (TSs) by initially finding all stationary points and then identifying the TSs among the set of solutions. We propose several regional tests, applicable to general nonlinear, twice continuously differentiable functions, to accelerate the convergence of such approaches by identifying areas that do not contain any TS or that may contain a unique TS. The tests are based on the application of the interval extension of theorems from linear algebra to an interval Hessian matrix. They can be used within the framework of global optimization methods with the potential of reducing the computational time for TS location. We present the theory behind the tests, discuss their algorithmic complexity and show via a few examples that significant gains in computational time can be achieved by using these tests.
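The paper's specific regional tests are not reproduced here; the sketch below shows a related, well-known device: a Gershgorin-style eigenvalue enclosure for an interval Hessian, which can certify that a box contains no index-1 saddle point (for instance when the Hessian is definite throughout the box). The interval Hessian data are hypothetical.

```python
# Sketch of a Gershgorin-style test on an interval Hessian (not the paper's
# regional tests).  If every eigenvalue of every Hessian in the box is
# certifiably positive (or certifiably negative), the box cannot contain an
# index-1 saddle point and can be discarded.
import numpy as np

def gershgorin_eig_bounds(H_lo, H_hi):
    """Enclosure [lam_lo, lam_hi] of all eigenvalues of all symmetric
    matrices H with H_lo <= H <= H_hi (entrywise)."""
    radius = np.sum(np.maximum(np.abs(H_lo), np.abs(H_hi)), axis=1) \
             - np.maximum(np.abs(np.diag(H_lo)), np.abs(np.diag(H_hi)))
    return float(np.min(np.diag(H_lo) - radius)), float(np.max(np.diag(H_hi) + radius))

def may_contain_index1_saddle(H_lo, H_hi):
    lam_lo, lam_hi = gershgorin_eig_bounds(H_lo, H_hi)
    # A definite Hessian over the whole box rules out an index-1 saddle.
    return not (lam_lo > 0.0 or lam_hi < 0.0)

# Hypothetical interval Hessian over some box: strictly diagonally dominant
# with positive diagonal, so the box is excluded.
H_lo = np.array([[4.0, -0.5], [-0.5, 5.0]])
H_hi = np.array([[6.0,  0.5], [ 0.5, 7.0]])
print("box may contain an index-1 saddle:", may_contain_index1_saddle(H_lo, H_hi))
```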
A computational study of global optimization solvers on two trust region subproblems
Montanher T, Neumaier A and Domes F
One of the relevant research topics to which Chris Floudas contributed was quadratically constrained quadratic programming (QCQP). This paper considers one of the simplest hard cases of QCQP, the two trust region subproblem (TTRS). In this case, one needs to minimize a quadratic function constrained by the intersection of two ellipsoids. The Lagrangian dual of the TTRS is a semidefinite program (SDP) and this result has been extensively used to solve the problem efficiently. We focus on numerical aspects of branch-and-bound solvers with three goals in mind. We provide (i) a detailed analysis of the ability of state-of-the-art solvers to complete the global search for a solution, (ii) a quantitative approach for measuring the cluster effect on each solver and (iii) a comparison between the branch-and-bound and the SDP approaches. We perform the numerical experiments on a set of 212 challenging problems provided by Kurt Anstreicher. Our findings indicate that SDP relaxations and branch-and-bound have orthogonal difficulties, thus pointing to a possible benefit of a combined method. The following solvers were selected for the experiments: , , , and .
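To make the problem form concrete, the sketch below sets up a small TTRS instance (an indefinite quadratic minimized over the intersection of two ellipsoids) and solves it with a local NLP solver only; unlike the branch-and-bound and SDP approaches compared in the paper, this carries no global guarantee. All data are hypothetical.

```python
# Small two trust region subproblem (TTRS) instance: minimize a nonconvex
# quadratic over the intersection of two ellipsoids.  A local solver is used
# purely to illustrate the problem form; it gives no global guarantee.
import numpy as np
from scipy.optimize import minimize

Q = np.array([[1.0, 0.0], [0.0, -2.0]])      # indefinite, hence nonconvex
c = np.array([0.5, 1.0])
A1 = np.diag([1.0, 0.25])                    # ellipsoid x' A1 x <= 1
A2 = np.diag([0.25, 1.0])                    # ellipsoid x' A2 x <= 1

objective = lambda x: 0.5 * x @ Q @ x + c @ x
constraints = [{"type": "ineq", "fun": lambda x, A=A1: 1.0 - x @ A @ x},
               {"type": "ineq", "fun": lambda x, A=A2: 1.0 - x @ A @ x}]

res = minimize(objective, x0=np.array([0.1, -0.1]), method="SLSQP",
               constraints=constraints)
print("local solution:", res.x, "objective:", res.fun)
```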
Arbitrarily tight αBB underestimators of general non-linear functions over sub-optimal domains
Kazazakis N and Adjiman CS
In this paper we explore the construction of arbitrarily tight αBB relaxations of general non-linear non-convex functions. We illustrate the theoretical challenges of building such relaxations by deriving conditions under which it is possible for an αBB underestimator to provide exact bounds. We subsequently propose a methodology to build αBB underestimators which may be arbitrarily tight (i.e., the maximum separation distance between the original function and its underestimator is arbitrarily close to 0) in some domains that do not include the global solution (defined in the text as "sub-optimal"), assuming exact eigenvalue calculations are possible. This is achieved using a transformation of the original function into a subenergy function and the derivation of αBB underestimators for the new function. We prove that this transformation results in a number of desirable bounding properties in certain domains. These theoretical results are validated in computational test cases where approximations of the tightest possible subenergy underestimators, derived using sampling, are compared to similarly derived approximations of the tightest possible classical αBB underestimators. Our tests show that subenergy underestimators produce much tighter bounds, and succeed in fathoming nodes which are impossible to fathom using classical αBB.
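For reference, the sketch below constructs the classical univariate αBB underestimator that the paper's subenergy transformation aims to tighten; here α is estimated by sampling the second derivative rather than by a rigorous interval bound, and the test function and domain are arbitrary.

```python
# Classical univariate alphaBB underestimator
#   L(x) = f(x) + alpha * (xL - x) * (xU - x),  alpha >= max(0, -min f'' / 2),
# with alpha estimated here by *sampling* f'' (not a rigorous interval
# Hessian bound), so this is only an illustrative sketch.
import numpy as np

f   = lambda x: np.sin(3.0 * x) + 0.1 * x**2
d2f = lambda x: -9.0 * np.sin(3.0 * x) + 0.2           # exact second derivative
xL, xU = 0.0, 3.0

xs = np.linspace(xL, xU, 10_001)
alpha = max(0.0, -0.5 * float(np.min(d2f(xs))))         # sampled curvature bound

underestimator = lambda x: f(x) + alpha * (xL - x) * (xU - x)

gap = f(xs) - underestimator(xs)                        # nonnegative on [xL, xU]
print(f"alpha = {alpha:.3f}, min gap = {gap.min():.3e}, max gap = {gap.max():.3f}")
```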
A manifold-based approach to sparse global constraint satisfaction problems
Baharev A, Neumaier A and Schichl H
We consider square, sparse nonlinear systems of equations whose Jacobian is structurally nonsingular, with reasonable bound constraints on all variables. We propose an algorithm for finding good approximations to all well-separated solutions of such systems. We assume that the input system is ordered such that its Jacobian is in bordered block lower triangular form with small diagonal blocks and with small border width; this can be performed fully automatically with off-the-shelf decomposition methods. Five decades of numerical experience show that models of technical systems tend to decompose favorably in practice. Once the block decomposition is available, we reduce the task of solving the large nonlinear system of equations to that of solving a sequence of low-dimensional ones. The most serious weakness of this approach is well known: it may suffer from severe numerical instability. The proposed method resolves this issue with the novel backsolve step. We study the effect of the decomposition on a sequence of challenging problems. Beyond a certain problem size, the computational effort of multistart (no decomposition) grows exponentially. In contrast, thanks to the decomposition, the computational effort of the proposed method grows only linearly with the problem size. It depends on the problem size and on the hyperparameter settings whether the decomposition and the more sophisticated algorithm pay off. Although there is no theoretical guarantee that all solutions will be found in the general case, increasing the so-called sample size hyperparameter improves the robustness of the proposed method.
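The tiny sketch below illustrates only the basic decomposition idea, solving a block lower triangular nonlinear system by forward substitution over small blocks; it does not reproduce the paper's backsolve step or its multistart/sampling layer, and the example system is made up.

```python
# Sketch of solving a square nonlinear system whose Jacobian is in block
# lower triangular form by forward substitution over the blocks, each small
# block handled by a local solver.  This shows only the basic decomposition
# idea, not the paper's backsolve step or sampling strategy.
import numpy as np
from scipy.optimize import fsolve

# Hypothetical 2-block system in variables (x0, x1 | x2):
#   block 1:  x0**2 + x1 - 3 = 0,  x0 - x1 + 1 = 0       (depends on x0, x1 only)
#   block 2:  x2**3 - x0*x1 = 0                           (depends on block 1)
def block1(v):
    x0, x1 = v
    return [x0**2 + x1 - 3.0, x0 - x1 + 1.0]

x0, x1 = fsolve(block1, [1.0, 1.0])                       # solve the first block
x2 = fsolve(lambda v: [v[0]**3 - x0 * x1], [1.0])[0]      # then the second block
print("solution:", x0, x1, x2)
```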
Rigorous packing of unit squares into a circle
Montanher T, Neumaier A, Markót MC, Domes F and Schichl H
This paper considers the task of finding the smallest circle into which one can pack a fixed number of non-overlapping unit squares that are free to rotate. Due to the rotation angles, packing unit squares into a container is considerably harder to solve than the corresponding circle packing problems. Consequently, arrangements have so far been proven optimal only for one or two unit squares. By a computer-assisted method based on interval arithmetic techniques, we solve the case of three squares and find rigorous enclosures for every optimal arrangement of this problem. We model the relation between the squares and the circle as a constraint satisfaction problem (CSP) and find every box that may contain a solution inside a given upper bound of the radius. Due to symmetries in the search domain, general purpose interval methods are far too slow to solve the CSP directly. To overcome this difficulty, we split the problem into a set of subproblems by systematically adding constraints to the center of each square. Our proof requires the solution of 6, 43 and 12 subproblems with 1, 2 and 3 unit squares respectively. In principle, the method proposed in this paper generalizes to any number of squares.
Which graphs are rigid in ℓ_p^d?
Dewar S, Kitson D and Nixon A
We present three results which support the conjecture that a graph is minimally rigid in d-dimensional ℓ_p-space, where p ∈ (1, ∞) and p ≠ 2, if and only if it is (d, d)-tight. Firstly, we introduce a graph bracing operation which preserves independence in the generic rigidity matroid when passing from ℓ_p^d to ℓ_p^{d+1}. We then prove that every (d, d)-sparse graph with minimum degree at most and maximum degree at most is independent in ℓ_p^d. Finally, we prove that every triangulation of the projective plane is minimally rigid in ℓ_p^3. A catalogue of rigidity preserving graph moves is also provided for the more general class of strictly convex and smooth normed spaces and we show that every triangulation of the sphere is independent for 3-dimensional spaces in this class.
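For readers unfamiliar with the sparsity counts in the conjecture, the block below states the standard definition of (k, ℓ)-sparse and (k, ℓ)-tight graphs (the conjecture takes k = ℓ = d); this is the usual definition from combinatorial rigidity, not a result of the paper.

```latex
% Standard sparsity counts behind the conjecture (stated for general (k, \ell);
% the conjecture uses k = \ell = d).  A graph G = (V, E) is (k, \ell)-sparse
% if every subgraph G' = (V', E') on at least two vertices satisfies
\[
|E'| \;\le\; k\,|V'| - \ell ,
\]
% and (k, \ell)-tight if, in addition,
\[
|E| \;=\; k\,|V| - \ell .
\]
```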
An extension of the proximal point algorithm beyond convexity
Grad SM and Lara F
We introduce and investigate a new generalized convexity notion for functions called prox-convexity. The proximity operator of such a function is single-valued and firmly nonexpansive. We provide examples of (strongly) quasiconvex, weakly convex, and DC (difference of convex) functions that are prox-convex; however, none of these classes fully contains the class of prox-convex functions or is fully included in it. We show that the classical proximal point algorithm remains convergent when the convexity of the proper lower semicontinuous function to be minimized is relaxed to prox-convexity.
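For reference, the block below recalls the classical definitions used above: the proximity operator with parameter λ > 0 and the proximal point iteration. The step-size choices and convergence conditions under prox-convexity are the subject of the paper and are not restated here.

```latex
% Classical definitions referenced above: the proximity operator of f with
% parameter \lambda > 0 and the proximal point iteration.
\[
\operatorname{prox}_{\lambda f}(x) \;=\;
\operatorname*{arg\,min}_{y}\Bigl\{\, f(y) + \tfrac{1}{2\lambda}\lVert y - x\rVert^2 \Bigr\},
\qquad
x^{k+1} \;=\; \operatorname{prox}_{\lambda_k f}\bigl(x^k\bigr), \quad k = 0, 1, 2, \ldots
\]
```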
The EMS vehicle patient transportation problem during a demand surge
Majzoubi F, Bai L and Heragu SS
We consider a real-time emergency medical service (EMS) vehicle patient transportation problem in which vehicles are assigned to patients so they can be transported to hospitals during an emergency. The objective is to minimize the total travel time of all vehicles while satisfying two types of time window constraints. The first requires each EMS vehicle to arrive at a patient's location within a specified time window. The second requires the vehicle to arrive at the designated hospital within another time window. We allow an EMS vehicle to serve up to two patients instead of just one. The problem is shown to be NP-complete. We therefore develop a simulated annealing (SA) heuristic for efficient solution in real time. A column generation algorithm is developed for determining a tight lower bound. Numerical results show that the proposed SA heuristic provides high-quality solutions in much less CPU time when compared to a general-purpose solver. Therefore, it is suitable for implementation in a real-time decision support system, which is available via a web portal (www.rtdss.org).
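The sketch below is a generic simulated annealing skeleton, not the paper's neighborhood moves, cost model, or time-window handling; it minimizes a toy travel-time objective over visit orders, and all parameters (initial temperature, cooling rate, iteration count) are hypothetical.

```python
# Generic simulated annealing skeleton (not the paper's specific moves or
# time-window constraints).  It minimizes a toy travel-time objective over
# visit orders using 2-opt reversals; all parameters are hypothetical.
import math
import random

random.seed(0)
n = 12
coords = [(random.random(), random.random()) for _ in range(n)]
dist = lambda i, j: math.dist(coords[i], coords[j])
tour_cost = lambda t: sum(dist(t[k], t[(k + 1) % n]) for k in range(n))

def simulated_annealing(T0=1.0, cooling=0.995, iters=20_000):
    current = list(range(n))
    best, best_cost, T = current[:], tour_cost(current), T0
    for _ in range(iters):
        i, j = sorted(random.sample(range(n), 2))
        cand = current[:i] + current[i:j + 1][::-1] + current[j + 1:]   # 2-opt move
        delta = tour_cost(cand) - tour_cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):
            current = cand
            if tour_cost(current) < best_cost:
                best, best_cost = current[:], tour_cost(current)
        T *= cooling
    return best, best_cost

print("best toy tour cost:", simulated_annealing()[1])
```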
Improved interval methods for solving circle packing problems in the unit square
Markót MC
In this work computer-assisted optimality proofs are given for the problems of finding the densest packings of 31, 32, and 33 non-overlapping equal circles in a square. In a 2005 study, a fully interval arithmetic-based global optimization method was introduced for this problem class, solving the cases of 28, 29, and 30 circles. Until now, these were the largest problem instances solved on a computer. Using the techniques of that paper, the estimated solution time for the next three cases would have been 3-6 CPU months. In the present paper this former method is improved in both its local and global search phases. We discuss a new interval-based polygon representation of the core local method for eliminating suboptimal regions, which has a simpler implementation, an easier proof of correctness, and faster behaviour than the former one. Furthermore, a modified strategy is presented for the global phase of the search, including improved symmetry filtering and tile pattern matching. With the new method the three cases have been solved in 26, 61, and 13 CPU hours, respectively, giving high-precision enclosures for all global optimizers and the optimum value. After discounting the hardware and compiler improvements since the former study, the new proof technique is roughly 40-100 times faster than the previous one. In addition, the new implementation is suitable for solving the next few circle packing instances with similar computational effort.
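The sketch below only spells out the feasibility conditions of the underlying packing problem (centers inside the square with clearance r, pairwise centers at least 2r apart) as a floating-point check; it is not the interval-arithmetic proof machinery of the paper, and the example packing is a trivial hand-made one.

```python
# Minimal feasibility check for a candidate packing of equal circles of
# radius r in the unit square (not the paper's interval-arithmetic proof
# machinery): centers must stay r away from the walls and 2r apart.
import numpy as np

def is_valid_packing(centers, r, tol=1e-12):
    centers = np.asarray(centers, dtype=float)
    inside = np.all((centers >= r - tol) & (centers <= 1.0 - r + tol))
    diffs = centers[:, None, :] - centers[None, :, :]
    d2 = np.sum(diffs**2, axis=-1)
    k = len(centers)
    d2[np.arange(k), np.arange(k)] = np.inf        # ignore self-distances
    separated = np.all(d2 >= (2.0 * r - tol) ** 2)
    return bool(inside and separated)

# Trivial example: 4 circles of radius 0.25 at the square's quarter points.
print(is_valid_packing([(0.25, 0.25), (0.25, 0.75),
                        (0.75, 0.25), (0.75, 0.75)], 0.25))
```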
Supply chain networks, wages, and labor productivity: insights from Lagrange analysis and computations
Nagurney A
The COVID-19 pandemic has dramatically demonstrated the importance of labor to supply chain network activities from production to distribution, with shortfalls in labor availability, for numerous reasons, resulting in product shortages and reduced profits of firms. Even as progress has been made through vaccinations, issues associated with labor are still arising. Increasing wages is a strategy to enhance labor productivity and also, in part, to ameliorate labor shortages, but it has not, until this work, been explored in a full supply chain network context. Specifically, in this paper, a game theory supply chain network model is constructed of firms competing in producing a substitutable, but differentiated, product and seeking to determine their equilibrium product path flows, as well as the hourly wages to pay their workers, under fixed labor amounts associated with links and wage-responsive productivity factors. The theoretical and computational approach utilizes the theory of variational inequalities. We first introduce a model without wage bounds on links and then extend it to include wage bounds. Lagrange analysis is conducted for the latter model, which yields interesting insights, as well as an alternative variational inequality formulation. A series of numerical examples reveals that firms can gain in terms of profits by being willing to pay higher wages, resulting in benefits also for their workers, as well as for consumers, who enjoy lower demand market prices for the products. However, sensitivity analysis should be conducted to determine the range of such wage bounds. Ultimately, we observed that the profits may decrease and then stabilize. This work adds to the literature on the integration of concepts from economics and operations research for supply chain networks and also has policy implications.
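As a generic illustration only, the block below shows how a wage bound on a link enters a Lagrange analysis through a nonnegative multiplier and a complementarity condition; the notation is not the paper's, and the paper's actual Lagrange analysis and alternative variational inequality formulation are not reproduced.

```latex
% Generic illustration (not the paper's notation) of how a wage bound
% w_a <= \bar{w}_a on a link a enters the Lagrange analysis: with a
% multiplier \lambda_a >= 0, the Lagrangian gains the term
% \lambda_a (\bar{w}_a - w_a), and any optimum satisfies
\[
\lambda_a\,\bigl(\bar{w}_a - w_a^*\bigr) \;=\; 0,
\qquad \lambda_a \;\ge\; 0,
\qquad w_a^* \;\le\; \bar{w}_a ,
\]
% so a positive multiplier signals a binding wage bound whose relaxation
% would change the firm's profit.
```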