OPTIMIZATION METHODS & SOFTWARE

Self-consistent gradient flow for shape optimization
Kraft D
We present a model for image segmentation and describe a gradient-descent method for level-set-based shape optimization. Gradient-descent methods are well known to converge slowly due to zig-zag movement, and we observe this for our problem as well, especially when sharp edges are present in the image. We interpret this behaviour in our specific context to gain a better understanding of the difficulties involved. One way to overcome slow convergence is the use of second-order methods; in our situation, however, they require derivatives of the potentially noisy image data and are thus undesirable. Hence, we propose a new method that can be interpreted as a self-consistent gradient flow and does not need any derivatives of the image data. It works very well in practice and leads to a far more efficient optimization algorithm. A related idea can also be used to describe the mean-curvature flow of a mean-convex surface. For this, we formulate a mean-curvature eikonal equation, which allows a numerical propagation of the mean-curvature flow of a surface without explicit time stepping.
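The abstract does not specify the segmentation energy, so the following is a minimal sketch under an assumed Chan-Vese-type fidelity term: one explicit gradient-descent step on a level-set function. The function name, time step, and smoothing width are illustrative choices; explicit stepping of exactly this kind is what exhibits the slow zig-zag convergence the paper sets out to avoid.

```python
import numpy as np

def level_set_gradient_step(phi, image, dt=0.1, eps=1.0):
    """One explicit gradient-descent step on a Chan-Vese-type fidelity
    energy. Only the data term is shown; a curvature/length term would
    be added in practice."""
    inside = phi > 0
    # Mean intensities inside and outside the current zero level set.
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[~inside].mean() if (~inside).any() else 0.0
    # Smoothed Dirac delta concentrates the update near the interface.
    delta = eps / (np.pi * (eps**2 + phi**2))
    # Descent direction: grow the region whose mean fits each pixel better.
    force = (image - c2)**2 - (image - c1)**2
    return phi + dt * delta * force
```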
Enhanced bounding techniques to reduce the protein conformational search space
McAllister SR and Floudas CA
The complexity and enormous size of the conformational space that must be explored for the protein tertiary structure prediction problem have led to the development of a wide assortment of algorithmic approaches. In this study, rather than proposing yet another prediction algorithm, we focus on the development of bounding techniques to reduce the conformational search space. Dihedral angle bounds on the ϕ and ψ angles are established based on the predicted secondary structure and studies of the allowed regions of ϕ/ψ space. Distance bounds are developed based on predicted secondary structure information (including β-sheet topology predictions) to further reduce the search space. This bounding strategy is entirely independent of the degree of homology between the target protein and the database of proteins with experimentally determined structures. The proposed approach is applied to the structure prediction of protein G as an illustrative example, yielding a significantly higher number of near-native protein tertiary structure predictions.
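As a sketch of how such dihedral bounds shrink the search space, the snippet below maps a predicted secondary-structure string to per-residue ϕ/ψ intervals. The numeric ranges are generic Ramachandran regions chosen for illustration, not the statistically calibrated bounds derived in the paper.

```python
# Illustrative phi/psi bounds in degrees, keyed by predicted secondary
# structure; the ranges are textbook Ramachandran regions, not the
# paper's derived bounds.
PHI_PSI_BOUNDS = {
    "H": ((-90.0, -40.0), (-60.0, -10.0)),    # alpha-helix
    "E": ((-180.0, -80.0), (80.0, 180.0)),    # beta-strand
    "C": ((-180.0, 180.0), (-180.0, 180.0)),  # coil: unrestricted
}

def dihedral_bounds(predicted_ss):
    """Map a secondary-structure string, e.g. 'CHHHHEEC', to a list of
    ((phi_lo, phi_hi), (psi_lo, psi_hi)) bounds, one tuple per residue."""
    return [PHI_PSI_BOUNDS[s] for s in predicted_ss]

bounds = dihedral_bounds("CHHHHEEC")  # 8 residues -> 8 bound pairs
```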
Proximal extrapolated gradient methods for variational inequalities
Malitsky Y
This paper is concerned with novel first-order methods for monotone variational inequalities. They use a very simple linesearch procedure that takes into account local information about the operator. The methods do not require Lipschitz continuity of the operator, and the linesearch uses only operator values. Moreover, when the operator is affine, the linesearch becomes very cheap, requiring only vector-vector operations. For all our methods, we establish an ergodic convergence rate. In addition, we modify one of the proposed methods for the case of composite minimization. Preliminary results from numerical experiments are quite promising.
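For orientation, the sketch below shows a classical extragradient iteration with an Armijo-type linesearch that, like the paper's methods, uses only operator values and no Lipschitz constant. It is a standard baseline in the same spirit, not the paper's extrapolated scheme; the names F, proj, and the parameter choices are assumptions.

```python
import numpy as np

def extragradient_linesearch(F, proj, x0, sigma=0.9, beta=0.5,
                             lam0=1.0, tol=1e-8, max_iter=1000):
    """Classical extragradient method with an Armijo-type linesearch for a
    monotone variational inequality over a closed convex set C, where
    proj(.) is the projection onto C. Uses only values of the operator."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        lam = lam0
        while True:
            y = proj(x - lam * Fx)
            # Local Lipschitz-type test: accept the step once the operator
            # varies slowly enough between x and the trial point y.
            if lam * np.linalg.norm(F(y) - Fx) <= sigma * np.linalg.norm(y - x):
                break
            lam *= beta
        if np.linalg.norm(y - x) < tol:  # y approximately solves the VI
            return y
        x = proj(x - lam * F(y))         # corrector step with the found stepsize
    return x
```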
Inducing strong convergence into the asymptotic behaviour of proximal splitting algorithms in Hilbert spaces
Boţ RI, Csetnek ER and Meier D
Proximal splitting algorithms for monotone inclusions (and convex optimization problems) in Hilbert spaces share the common feature that, in general, they guarantee only weak convergence of the generated sequences to a solution. In order to achieve strong convergence, one usually needs to impose more restrictive properties on the involved operators, such as strong monotonicity (respectively, strong convexity for optimization problems). In this paper, we propose a modified Krasnosel'skiĭ-Mann algorithm for determining a fixed point of a nonexpansive mapping and show strong convergence of the iteratively generated sequence to the minimal norm solution of the problem. Relying on this, we derive a forward-backward and a Douglas-Rachford algorithm, both endowed with Tikhonov regularization terms, which generate iterates that converge strongly to the minimal norm element of the set of zeros of the sum of two maximally monotone operators. Furthermore, we formulate strongly convergent primal-dual algorithms of forward-backward and Douglas-Rachford type for highly structured monotone inclusion problems involving parallel sums and compositions with linear operators. The resulting iterative schemes are then specialized to convex minimization problems. The theoretical results are illustrated by numerical experiments on the split feasibility problem in infinite-dimensional spaces.
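A minimal sketch of the regularized fixed-point iteration described above, assuming a nonexpansive map T: each iterate is first shrunk toward the origin by a Tikhonov factor and then relaxed toward T, which steers the sequence toward the minimal norm fixed point. The update form and parameter sequences shown are one plausible instantiation; the paper states the precise scheme and conditions.

```python
import numpy as np

def tikhonov_km(T, x0, n_iters=1000):
    """Krasnosel'skii-Mann iteration with Tikhonov regularization for a
    nonexpansive map T. Parameter sequences are illustrative choices,
    not the paper's exact conditions."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iters + 1):
        beta = 1.0 - 1.0 / (n + 1)  # Tikhonov factor: beta_n -> 1
        lam = 0.5                   # relaxation parameter in (0, 1)
        y = beta * x                # shrink toward the origin (regularization)
        x = y + lam * (T(y) - y)    # relaxed KM step at the shrunk point
    return x
```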