Image deblurring: comparison and analysis

Technological advancements and the spread of digital devices and media have made images an important part of modern social life. Image blurring is a common challenge that results from multiple factors such as object movement, camera shake, and raindrops. Image deblurring has accordingly become an important subfield of image restoration. More than five decades of research have yielded useful image deblurring technologies. This article provides an overview of current knowledge on image deblurring technology, focusing on both classical methods and modern trends in the field. Drawing on evidence from 34 scholarly articles, it reviews conventional methods and the achievements of past studies, and examines the algorithms underlying specific deblurring methodologies adopted in recent work. It also covers the recent trend of learning-based restoration models and their effectiveness, including Convolutional Neural Networks, Recurrent Neural Networks, and Graph Convolutional Networks, as well as novel deep-learning deblurring techniques. Based on the findings, open issues, opportunities, and directions for future research are identified to advance image deblurring technologies.


Introduction
Image deblurring is a long-standing area of research interest [1]. It is a key part of the image processing realm, focused on restoring blurred images into decipherable ones. Digital images commonly exhibit blur in the form of motion blur, Gaussian blur, and average blur [1]. Motion blur results from relative motion between the image-capturing system and the scene, and its restoration requires an estimate of the motion path. Gaussian blur applies a Gaussian function to an image and is mostly used to reduce noise and detail by smoothing neighboring pixels [2]. Average blur arises when each pixel is set to the average value of its surrounding box neighborhood [3]; an averaging filter can reduce noise in this setting. Because of the image degradation produced by these different blur scenarios, the non-readability of blurred images often harms algorithmic output. Image restoration is the field focused on reducing the degradation of input images, applying different techniques to recover clear versions of degraded images. Although there is no single generic deblurring model, the field is well established in the literature. A plausible explanation for the absence of a single model is that blur takes many different forms, and deriving one equation that covers all of them remains an open challenge in the extant literature. Alongside the blur types, noise also contributes to image degradation.

Image deblurring methods
Deblurring techniques fall into two broad classes: blind deblurring (BD) and non-blind deblurring (NBD). BD must jointly estimate the blur kernel and the latent image, whereas NBD restores the original image from a blurred input whose blur kernel is assumed known [4]. The two classes are explored below.

Blind Image Deblurring
BD is a classical method of restoring an image when there is no information about the blurring and noise that caused the degradation. Deblurring with this method is achieved without a known point spread function (PSF) [5]. The PSF is a "point input, represented as a single pixel in the "ideal" image, which will be reproduced as something other than a single pixel in the "real" image" [6]. The applicability of blind methods is broader than that of non-blind methods, because in practice the PSF is rarely known accurately. NBD techniques are sensitive to mismatches between the assumed PSF and the actual blurring PSF, so a lack of PSF knowledge often results in poor deblurring outcomes.
Two techniques can be distinguished within BD. The first makes initial estimates of the true image and the PSF [1,2], then follows an iterative process until predefined matching criteria are met. The advantage of this approach is that it is not sensitive to noise; however, the synchronized evaluation of the image output and the PSF leads to a more complex computational algorithm. The second approach is based on maximum-likelihood restoration [2] and estimates indicators such as the covariance and PSF matrices. These PSF estimations use simple algorithms with minimal computational sophistication to obtain the blur and noise of the original image. A representative BD approach is 'deep unrolling for blind deblurring' (DUBLID), a neural-network architecture [7]. Blind deconvolution algorithms are also applied to deblur images when no information about the distortion is known [8].
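The first, iterative family described above can be illustrated with an Ayers-Dainty-style alternating scheme: Richardson-Lucy updates are applied in turn to the signal estimate and the PSF estimate. The 1-D NumPy sketch below is illustrative only, not the specific algorithm of [1,2]; the flat initialization, iteration count, and circular-convolution blur model are all assumptions made for the example.

```python
import numpy as np

def cconv(a, b):
    # Circular convolution via FFT (the assumed blur model in this sketch).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    # Circular cross-correlation via FFT (adjoint of cconv).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

def blind_rl(g, iters=30, eps=1e-8):
    """Alternate Richardson-Lucy updates for the signal f and the PSF h."""
    n = g.size
    f = np.full(n, g.mean())            # flat initial signal estimate
    h = np.full(n, 1.0 / n)             # flat initial PSF estimate
    for _ in range(iters):
        f = f * ccorr(g / (cconv(f, h) + eps), h)   # RL step for f, h fixed
        h = h * ccorr(g / (cconv(f, h) + eps), f)   # RL step for h, f fixed
        h = np.maximum(h, 0.0)
        h = h / h.sum()                 # keep the PSF non-negative, unit mass
    return f, h

# Synthetic example: a spike train blurred by an "unknown" 5-tap box PSF.
l = np.zeros(64); l[[20, 40]] = 1.0
k = np.zeros(64); k[:5] = 0.2
g = cconv(l, k)

f_hat, h_hat = blind_rl(g)
```

The alternating structure mirrors the synchronized image/PSF evaluation discussed above: each half-step holds one unknown fixed while refining the other, which is what makes the joint problem computationally heavier than non-blind deconvolution.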
Shan et al. [8] suggested a deblurring method using a unified probabilistic model to estimate the kernel and the restored image. The proposed algorithm included terms such as spatial noise randomness and a smoothness prior that limited contrast in the unblurred image when the blurred image exhibited low contrast. One assumption of this model is that the input kernel is largely inaccurate, and thus maximum a posteriori (MAP) estimation is applied.
Research on different deblurring methods is provided in [8,9]. Removing motion blur is a major BD problem, addressed through linear and non-linear processing [10]. Non-linear processing is a classical technique that uses the Local Radon Transform to estimate the blur kernel and restore a clearer, sharper image [11]. BD takes the blur kernel into account when learning to deblur event-based images, and non-linear processing has been used to recover images degraded by motion [11,12]. The authors first formulated sequential event-based motion deblurring, then demonstrated how an end-to-end deep architecture can aid optimization. The proposed non-linear model used a recurrent neural network with knowledge of the visual and temporal aspects of the image at local and global scales. Experimental results on real-world datasets showed that this deep-learning architecture is a novel way to achieve superior deblurring performance.
BD involves weak assumptions about the filter. To address this problem, one line of work focuses on image edges, a process called edge estimation [12]; kernel estimation is then initialized using the edge information [13]. The MAP and Total Variation (TV) approaches provide further techniques for kernel estimation. The MAP approach reduces the problem to estimating the kernel, assuming a fixed combination of non-unified segmentation masks. The TV framework is applied thanks to its superior ability to preserve edges, and it yields a faster deblurring algorithm with better quality. Both frameworks are associated with maximum-likelihood estimation. Above all, BD methods lack the ability to estimate preliminary information about the image scene.
The literature also reports loss functions such as the multi-scale frequency reconstruction (MSFR) loss, the multi-scale Charbonnier (MSC) loss, and the multi-scale edge (MSED) loss (Jiang et al. [12] and Ezumi et al. [14]). Ezumi et al. used these loss functions to remove blur caused by raindrops. The authors incorporated a non-local operator (Global Context Network) to capture long-range dependencies. They also included loss functions in the training stage by adding models for recovering components important to high-frequency detail [15], along with a multi-scale loss to train smaller-scale processing blocks.
As this suggests, BD approaches are suitable where there is no information about the PSF. The deconvolution function deblurs the image and restores the PSF concurrently. The method requires assumed initial parameters to initiate deblurring through the iterative phases. An example of a motion-blurred image deblurred using this method is shown in Figure 1 below. Jiang et al. [12] proposed a deep-learning model that deblurs the image in Figure 1 by learning to recover details from the degraded image and the motion events.

Non-Blind Deblurring
As opposed to blind deblurring, most non-blind deblurring (NBD) methods suggested in the literature are designed to perform better when the motion is presumed accurately known [2]. Accordingly, NBD methods often fail in situations where the blur estimate is noisy. The conventional NBD degradation model is

b = k * l + n,

where b is the blurred image, l is the latent sharp image related to b through the blur kernel k, '*' denotes the convolution operation, and n is the noise (Additive White Gaussian Noise). Most NBD approaches use a MAP formulation in which the kernel estimate determines the latent image through optimization. NBD therefore requires prior knowledge of the PSF for deconvolution to take place. Several NBD approaches from the literature are discussed below.
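The degradation model b = k * l + n can be simulated directly. The NumPy sketch below uses a 1-D signal as a stand-in for an image row; the 7-tap box kernel and the noise level are illustrative assumptions, not values from the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent sharp signal l (a 1-D stand-in for an image row).
l = np.zeros(64)
l[30:34] = 1.0

# Blur kernel k: normalized 7-tap box (uniform motion blur along one axis).
k = np.ones(7) / 7.0

# Degradation model b = k * l + n, where '*' is convolution and
# n is Additive White Gaussian Noise.
b = np.convolve(l, k, mode="same") + rng.normal(scale=0.01, size=l.size)
```

Convolving with the normalized kernel spreads the sharp features and lowers their peak values, which is exactly the degradation an NBD method must invert given knowledge of k.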

The Wiener filter is a classical NBD technique used to suppress the noise problems that characterize blurred-image deconvolution [1]. The Wiener filter attempts to minimize the impact of blur at frequencies with a low signal-to-noise ratio, and it requires knowledge of the PSF parameters. Deblurring is possible where the frequency content and additive noise are known; without noise, the approach reduces to an inverse filter. It deconvolves the noisy image by minimizing the mean square error between the desired and estimated random processes [2]. Lucy-Richardson (LR) is another classical deblurring algorithm. Similar to the Wiener filter, the LR algorithm works in situations where the PSF is known but there is no information about the noise [4]. It is an iterative process focused on restoring the image using the known blurring operator. One main challenge of this approach is the number of iterations required: as the iterative cycles increase, computation slows down and noise may be amplified. Both the Wiener filter and LR algorithms are often ineffective, and research has proposed new formulations focused on developing more accurate image priors [4,5]. Yet such advanced models often exhibit optimization problems that are difficult to eliminate, which limits their practical use. In recent years, research has exploited convolutional neural networks and deep learning as more promising methods of image deblurring, and they have shown great potential.
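As a concrete illustration of Wiener deconvolution, the NumPy sketch below deblurs a 1-D signal in the frequency domain. The circular blur model, the box kernel, and the noise-to-signal ratio (nsr) are illustrative assumptions; setting nsr to zero recovers the inverse filter mentioned above.

```python
import numpy as np

def wiener_deconvolve(b, k, nsr):
    """Frequency-domain Wiener deconvolution.
    b: blurred signal, k: known blur kernel (PSF), nsr: noise-to-signal ratio."""
    n = b.size
    K = np.fft.fft(k, n)                      # zero-padded kernel spectrum
    B = np.fft.fft(b)
    # Wiener filter: conj(K) / (|K|^2 + nsr); with nsr = 0 this is
    # the plain inverse filter, which amplifies noise where |K| is small.
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft(W * B))

rng = np.random.default_rng(0)
l = np.zeros(128); l[50:60] = 1.0             # latent sharp signal
k = np.ones(5) / 5.0                          # known PSF (5-tap box blur)
b = np.real(np.fft.ifft(np.fft.fft(l) * np.fft.fft(k, 128)))  # circular blur
b += rng.normal(scale=0.001, size=128)        # mild additive Gaussian noise

l_hat = wiener_deconvolve(b, k, nsr=1e-4)
```

The nsr term regularizes frequencies where the kernel spectrum is weak, which is precisely why the Wiener filter outperforms naive inversion at low signal-to-noise frequencies.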

Convolutional Neural Networks and Recurrent Neural Networks
Deblurring images with neural networks is a recent trend that addresses the complexities associated with classical methods. Different Convolutional Neural Networks (CNNs) have emerged since the advent of deep learning [16]. CNN approaches exist in different forms, varying in components and structure. In 2014, for example, Schuler et al. [17] suggested a deep-layered architecture for image deblurring. Cheng et al. [18] used a deep CNN to formulate the image prior and derived an effective deblurring approach anchored on a discriminative image prior. Nah et al. [16] developed a fully convolutional network that proved effective in directly estimating latent clear images, using multi-scale residual networks. Other works adopted Nah et al.'s model to develop multi-level structures capable of aggregating image features more sharply [18,19]. CNN-based deblurring methods are especially beneficial in low-latency scenes, as they circumvent the repetitive optimization process that characterizes traditional methods.
Another good example is a deblurring model that combines CNNs with Recurrent Neural Networks (RNNs), designed to learn spatially varying depths. Using multiple hierarchical networks with varying depths, Zhang et al. [20] proposed a more robust image deblurring model. It is clear that deep-learning models offer a better capability for enhancing deblurring performance. In addition to single-image deblurring, multi-image or video deblurring is an emerging process in the image deblurring field, and several recent works attempt to deblur multiple images to restore videos to better sharpness and contrast.

Generative Adversarial Networks
Another recent trend is the use of Generative Adversarial Networks (GANs), a type of machine learning in which two neural networks compete against each other [21]. GANs emerged in 2014 and have had massive success. What makes GANs interesting is that they can generate new content from a given dataset. A well-known illustration is the generation of results after training on two datasets (TFD and MNIST) (see Figure 2). The rightmost column contains the true data closest to the adjacent generated samples, showing that GANs produce outcomes based on learning rather than memorization.
Given a training dataset, a GAN learns to produce new data resembling it. In image deblurring, a GAN can learn from images to generate new images that look almost authentic to the human eye. Although GANs started as generative models trained without supervision, they have also proved effective under semi-supervised and reinforcement learning. The overall idea of GANs, as conceived by Goodfellow and others, is to define a contest between a discriminator network and a generator network [21]. The generated samples challenge the discriminator to learn the differences between them, while the generator tries to 'fool' the discriminator by producing samples that are practically indistinguishable from real ones. An example GAN architecture is provided in Figure 3.
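The contest described above is formalized by the adversarial value function V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))], which the discriminator maximizes and the generator minimizes. The NumPy sketch below evaluates it for a toy linear discriminator and generator; all weights, distributions, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy linear "discriminator": D(x) = sigmoid(w*x + b), weights fixed here.
w, b = 2.0, -1.0
def D(x):
    return sigmoid(w * x + b)

# Toy "generator": maps latent noise z to samples via a linear transform.
def G(z):
    return 0.5 * z + 1.0

real = rng.normal(loc=1.0, scale=0.2, size=1000)   # stand-in "real" data
z = rng.normal(size=1000)                          # latent noise
fake = G(z)

# Adversarial value function V(D, G).
v = np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake)))

# In practice the generator maximizes E[log D(G(z))] instead, i.e.
# minimizes the "non-saturating" loss below, for stronger gradients.
gen_loss = -np.mean(np.log(D(fake)))
```

Training alternates gradient steps on these two objectives; here the networks are frozen, so the sketch only shows how the competing losses are computed.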

Graph Convolutional Networks
Another recent development is the advent of Graph Convolutional Networks (GCNs), which have demonstrated the capacity to handle complex data such as graphs and point clouds [22]. GCNs have attracted significant attention in machine learning and image deblurring, and evidence of GCN application to image classification exists [23]. The literature shows that GCNs work better on encoded data because of its high-level semantics. Based on a comparison of the network structures used for data classification and image rebuilding, Xu and Yin [24] argued that semantic relationships also exist in low-level properties such as the intermediate feature maps of CNNs. They proposed an encoder-decoder network with additional graph convolutions [24], converting feature maps into nodes of a pre-created graph to artificially create graph-structured data; graph regularization then makes the data more structured. The outcome of this experiment showed that GCNs can enhance the image deblurring process by providing even higher resolution (see Figure 4). A comparison of the results in Figure 4 shows that the proposed image restoration model outperforms previous methods in terms of resolution. Yet this is initial work on using GCNs for deblurring, and future works are expected to build on it.
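A single graph-convolution layer of the kind used in such networks follows the propagation rule H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W), where A is the adjacency matrix, H the node features, and W a learned weight matrix. The NumPy sketch below applies one such layer to a toy 4-node graph; the graph, features, and random weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # toy 4-node adjacency matrix
H = rng.normal(size=(4, 3))                 # 3 input features per node
W = rng.normal(size=(3, 2))                 # projection to 2 output features

A_hat = A + np.eye(4)                       # add self-loops so each node
                                            # keeps its own features
d = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric degree normalization

# One graph-convolution layer with ReLU activation.
H_next = np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

In the deblurring setting described above, the nodes would correspond to positions in a CNN feature map rather than an abstract graph, but the propagation step is the same.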

Novel Deep-learning Image Deblurring
Due to the poor performance of deblurring networks on images with larger blur kernels, new network designs have emerged. Nah et al.'s [25] DeepDeblur revealed that quality outputs cannot be sustained as the network runs for a long time, though it is still faster than classical methods. A plausible solution is parameter sharing to reduce model size. The computational challenge nonetheless remains, and [25] attempted to solve it by designing a 'coarse-to-fine', multi-scale structure built on a GAN. Yet knowledge of how network components contribute to deblurring is still insufficient, and identifying the most suitable network for image deblurring remains a question for further research.
Achieving high performance and efficiency in learning-based algorithms requires a comprehensive understanding of network backbones, as they directly influence image-processing outcomes. Given this direct influence on deblurring efficiency, different backbones have been adopted in the literature, and they form the basis of the various learning-based processing methods. Some authors use special blocks to improve the efficiency of the encoder/decoder feature maps [23,24]. Others use parameter sharing so that fewer parameters are needed, simplifying training and thus improving performance [26]. Recent research shows that a simplified encoder/decoder structure can also generate better output without parameter sharing [27].
Another basic determinant of the performance of learning-based networks is the framework used in the deblurring process. The encoder-decoder structure has proved to deliver better performance [26]. Using encoders and decoders in multi-scale structures such as DeepDeblur subjects the input image to several encode/decode passes, achieving finer image restoration. However, a multi-scale network is likely to slow the learning process, especially when parameter sharing is absent [25]. To address this challenge, Shafiq and Gu [28] describe a residual learning structure (DnCNN) that improves accuracy while avoiding degradation. Also proposed is a densely connected network (DenseNet), in which each layer uses knowledge from the preceding layers (Figure 5) [29]. Accurate segmentation of input images, an important step in deblurring pipelines, also has a significant impact on image accuracy.

Summary, Challenges and Opportunities for Image Deblurring Methods
Machine-learning and deep-learning methods are the most recent developments in the image restoration field. Compared with classical methods, learning-based methods bring significant benefits to image deblurring, and evidence shows enhanced performance in the deblurring process. On most standard datasets, learning-based methods outpace traditional technologies by reducing the iterative cycles of the deblurring process [12]. Learning also provides opportunities for more realistic restoration of single and multiple images; for example, it is now possible to restore images and videos from degradation by filling in missing points in consecutive frames. Deep-learning algorithms are expected to fit on typical computing hardware, resulting in higher computational efficiency. Compared with conventional deblurring models, neural networks have enhanced the efficiency of graphical processing, an advancement over the iterative cycle of image restoration.
However, current image deblurring technology still faces challenges. While recent trends provide better deblurring techniques, they come with increased computational costs that complicate real-time processing [30]. The high computational requirements also demand substantial hardware, such as memory and graphical processing units, and it is challenging to satisfy these requirements on the embedded systems typically used in industry. This limitation suggests the need for simpler models for image restoration. In terms of performance, existing techniques still struggle to process images with large blur kernels [16], which suggests the need for improved algorithmic performance. The architectures of learning-based deblurring techniques are improved based on rules and experience from computer tasks such as PSF detection and classification [24], which underscores the need for a profound understanding of the network elements and structures used in image processing.
While CNNs have proved effective in image deblurring, adapting GCNs within CNNs provides new opportunities for convolutional neural networks. Although existing GCNs focus on image classification, the potential of graph structures in CNN feature maps to enhance deblurring is growing. Experiments by Xu and Yin [24] showed that adapting GCN structures within CNNs can enhance deblurring performance, albeit at additional computational cost. Indeed, examining topological associations in the feature maps may further enhance image restoration processes.

Conclusion
This review shows that while image deblurring remains one of the most researched problems in the image processing realm, it is still a topic for further research. Over the last two decades, notable achievements include the application of deep neural-network architectures and the adoption of graph structures in CNN feature maps to improve image output. Experiments show that adopting these recent developments can improve image processing and restoration. However, several research opportunities exist to improve the quality of deblurring. First, image deblurring for low-cost devices is a future direction. While deblurring motion-blurred images is achievable with today's technology, algorithmic performance remains a major challenge on devices with limited hardware and software capacity. Deep learning today is most effective in large models, most of which exceed the capacity of mobile devices [31]. Some researchers, such as Chiang et al. [32], sought to address computational complexity by proposing a portable architecture aimed at a better quality-latency trade-off on a deep-learning accelerator, but limitations still emerged. They concluded that there is a need to search for portable network architectures that account for the constraints of mobile devices, including quantization and pruning as well as hardware preferences and limitations. The research also revealed the need for a systematic search approach to portable network architecture, such as Network Architecture Search, for improved portability [32]. For high-quality, real-time deblurring of motion images captured on mobile-device cameras, model compression without degrading algorithmic performance is an important direction for future research. With the popularity of unsupervised learning, datasets play a critical role in learning for image deblurring, and developing methods to improve data quality would improve training models. Porav et al. [33] presented a system that deblurs images degraded by adherent raindrops, using a dataset of images affected by real raindrops. While the experiment generated quality restorations, the role of a quality dataset emerged as an important direction for future studies. Future research may design a method of creating computer-generated raindrop data that is indistinguishable from real raindrops in its relevance for training models; this discovery would improve the quantitative performance of image-processing tasks. A research opportunity also exists in developing training models that use only the blurred image itself, without depending on external training datasets. It is also possible to develop a function that measures the extent of image blur to enable reinforcement learning, which would allow deep-learning models to generate the sharpest image. Further, future research should focus on restoring images degraded by low illumination or Gaussian blur. It is possible to generate algorithms that can process multip

Figure 1. Comparison of the blurred image and the generated sharp image. The results are from [11].