Preface

Superiorization: theory and applications


Published 1 March 2017 © 2017 IOP Publishing Ltd
Citation: Yair Censor et al 2017 Inverse Problems 33 040301. DOI: 10.1088/1361-6420/aa5deb




The superiorization methodology is used for improving the efficacy of iterative algorithms whose convergence is resilient to certain kinds of perturbations. Such perturbations are designed to 'force' the perturbed algorithm to produce results that are more useful for the intended application than those produced by the original iterative algorithm. The perturbed algorithm is called the 'superiorized version' of the original unperturbed algorithm. If the original algorithm is computationally efficient and useful in terms of the application at hand, and if the perturbations are simple and inexpensive to calculate, then the advantage of this method is that, for essentially the computational cost of the original algorithm, we obtain something more desirable by steering its iterates according to the designed perturbations.

This is a very general principle that has been used successfully in some important practical applications, especially for inverse problems such as image reconstruction from projections, intensity-modulated radiation therapy and nondestructive testing, and it awaits implementation and testing in additional fields.

An important case is when the original algorithm is 'feasibility-seeking' (in the sense that it strives to find some point that is compatible with a family of constraints) and the perturbations that are introduced into the original iterative algorithm aim at reducing (not necessarily minimizing) a given merit function. In this case superiorization has a unique place in optimization theory and practice.

Many constrained optimization methods are based on methods for unconstrained optimization that are adapted to deal with constraints. Such is, for example, the class of projected gradient methods, wherein the unconstrained minimization inner step 'leads' the process and a projection onto the whole constraints set (the feasible set) is performed after each minimization step in order to regain feasibility. This projection onto the constraints set is in itself a non-trivial optimization problem, and the need to solve it in every iteration hinders projected gradient methods and limits their use to feasible sets that are 'simple to project onto.' Barrier or penalty methods are likewise based on unconstrained optimization combined with various 'add-ons' that guarantee that the constraints are preserved. Regularization methods embed the constraints into a 'regularized' objective function and proceed with unconstrained solution methods for the new regularized objective function.
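To make this computational burden concrete, here is a minimal Python sketch of a projected gradient iteration; the objective, feasible set and parameter values are hypothetical illustrations chosen for the example, not taken from any paper in this issue.

```python
import numpy as np

def projected_gradient(grad_f, project_C, x0, step=0.1, n_iters=100):
    """Minimal projected-gradient sketch: an unconstrained gradient step
    'leads' the process, and a projection onto the feasible set C restores
    feasibility after every step.  project_C must solve the (possibly hard)
    projection subproblem anew at each iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad_f(x)   # unconstrained minimization step
        x = project_C(x)           # regain feasibility (a subproblem in itself)
    return x

# Hypothetical toy use: minimize ||x - a||^2 over the unit ball,
# for which the projection happens to be cheap.
a = np.array([2.0, 1.0])
grad_f = lambda x: 2.0 * (x - a)
project_ball = lambda x: x / max(1.0, np.linalg.norm(x))
x_star = projected_gradient(grad_f, project_ball, x0=np.zeros(2))
```

The sketch only works efficiently because the unit ball is 'simple to project onto'; for a general feasible set the projection step would dominate the cost of every iteration.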

In contrast to these approaches, the superiorization methodology can be viewed as an antipodal way of thinking. Instead of adapting unconstrained minimization algorithms to handle constraints, it adapts feasibility-seeking algorithms to reduce merit function values. This is done while retaining the feasibility-seeking nature of the algorithm and without paying a high computational price. Furthermore, general-purpose approaches have been developed for automatically superiorizing iterative algorithms for large classes of constraints sets and merit functions; these provide algorithms for many application tasks.
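As a rough illustration of this reversal, the following Python sketch (a schematic of the general idea, not an algorithm from any paper in this issue) perturbs each iterate along a nonascending direction of a merit function, with shrinking step sizes, before applying the feasibility-seeking projections; the function names, the toy constraints and the merit function are assumptions made only for the example.

```python
import numpy as np

def superiorized_feasibility(project_sets, merit_grad, x0,
                             beta0=1.0, decay=0.9, n_iters=200):
    """Hedged sketch of a superiorized feasibility-seeking iteration.
    Each sweep first perturbs the iterate along a nonascending direction of
    the merit function (here, the negative normalized gradient) with
    summable step sizes beta_k, then applies the feasibility-seeking step
    (sequential projections onto the constraint sets)."""
    x = np.asarray(x0, dtype=float)
    beta = beta0
    for _ in range(n_iters):
        g = merit_grad(x)
        norm = np.linalg.norm(g)
        if norm > 0:
            x = x - beta * g / norm     # merit-reducing perturbation
        beta *= decay                   # summable perturbation sizes
        for P in project_sets:          # feasibility-seeking sweep
            x = P(x)
    return x

# Hypothetical toy use: two half-space constraints x[0] >= 1 and x[1] >= 1,
# with merit function phi(x) = ||x||^2 (gradient 2x).
P1 = lambda x: x if x[0] >= 1 else np.array([1.0, x[1]])
P2 = lambda x: x if x[1] >= 1 else np.array([x[0], 1.0])
x_sup = superiorized_feasibility([P1, P2], lambda x: 2 * x, x0=np.array([5.0, 5.0]))
```

Because the perturbation sizes shrink and are summable, the perturbations do not destroy the feasibility-seeking behaviour of the underlying projection method, which is the sense in which the merit-function reduction comes at essentially the computational cost of the original algorithm.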

To a reader who is new to the superiorization methodology and the perturbation resilience of algorithms, we recommend first reading the recent reviews in [1–3]. Current work on superiorization can be appreciated from the continuously updated Internet page [4]. For a recent description of previous work that is related to superiorization but is not included in [4], we direct the reader to [5, section 3].

The aim of this special issue is to promote new directions that will advance research on superiorization and its applications. Thus, it includes papers on various topics, such as bounded perturbation resilience of algorithms in fields other than feasibility seeking, analysis of superiorized algorithms, and new computational work based on superiorization. We hope that this special issue will increase the visibility of the superiorization methodology to mathematicians who will investigate its underlying mathematical basis and to practitioners who will find the method useful for the solution of problems in real-world applications. The recent release of the SNARK14 software package [6], with its built-in capability to superiorize iterative algorithms to improve their performance, can be helpful to practitioners in this respect.

We thank the authors and the many reviewers who responded to our requests and took the time and effort to produce and improve the papers in this issue.

Acknowledgments

The work of Y Censor was supported by Research Grant No. 2013003 of the United States-Israel Binational Science Foundation (BSF).
