Sparsity Augmented Restoration of Underwater Images using Compressive Sensing

The study of underwater environments is difficult because both the laws of reflection and refraction influence visualization in the underwater medium. Other variables, such as aquatic flora and fauna, also strongly affect the lighting conditions. When such images are processed optically, degradation can occur due to various environmental factors and capture-device defects. Moreover, because of oversampling, algorithms based on the traditional Nyquist criterion can prove computationally expensive and yield lower accuracy. Compressive Sensing (CS) provides an alternative approach that overcomes the oversampling problem: the efficiency of underwater restoration algorithms can be improved by incoherent sampling combined with sparsity. The proposed model is applied to blurred underwater images, and the algorithm is evaluated for restoration using directional gradient priors. The contributions of the proposed algorithm are: (a) incoherent sampling, which reduces the oversampling problem, and (b) enhancement of sparsity coefficients for restoration algorithms. The proposed approach benefits underwater imaging mainly because the smaller number of random samples in compressive sensing produces a better sparsity prior at the initial stage, which makes the solution to the ill-posed problem converge quickly and with high accuracy. We therefore propose a new Temporal Sparse Bayesian Learning (TSBL) scheme using compressive sensing that yields higher-resolution image enhancement under underwater conditions.


Introduction
Compressive sensing is a recent technique that recovers or restores a degraded signal from incomplete information [1,2]. Traditionally, the Shannon-Nyquist sampling theorem [8] is used to reconstruct an image or signal from acquired data. According to this theorem, sampling at the Nyquist rate, that is, twice the bandwidth of the signal, allows the signal to be completely recovered from its samples. In some scenarios, when broadband or high-resolution digital imaging is desired, the Nyquist frequency is so high that there are too many samples to store or transmit. Most real-time signals are sparse in nature and must be processed before transmission for specific applications. In many instances, communication-link signals are Fourier-compressible, whereas discrete cosine and wavelet bases are generally better suited to natural image compression. In addition, the sparse distribution of a signal opens up more possibilities for its effective processing.
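As a small numerical sketch of this compressibility (illustrative only, not part of the paper's pipeline), a smooth signal that is dense in the time domain concentrates nearly all of its energy in a few DCT coefficients:

```python
import numpy as np
from scipy.fftpack import dct

# A smooth 1-D signal: dense in the time domain, compressible in the DCT basis.
n = 256
t = np.linspace(0, 1, n)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)

c = dct(x, norm='ortho')          # orthonormal DCT-II coefficients
energy = np.sort(c ** 2)[::-1]    # coefficient energies, largest first

# A handful of coefficients carry nearly all of the signal energy.
frac = energy[:10].sum() / energy.sum()
print(f"energy in top 10 of {n} DCT coefficients: {frac:.4f}")
```

This is exactly the property that sparse approximation exploits: storing only the largest coefficients preserves the signal with small distortion.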
The Compressive Sensing [9] technique is well suited to recovering a degraded image when the number of measurements is smaller than the number of unknown parameters. This class of problem, defined as ill-posed, does not in general have a unique solution, and CS helps reach a solution through constrained convergent minimization. Sparsity in challenging underwater scenes admits more than one solution because of multi-level degradations, including geometric distortions and color deterioration.

A. Sparse Measurement Model (SMM)
The basic sparse recovery model of a signal can be defined as a single measurement vector (SMV) model, given by

y = Ux + v    (1)

Here y represents the degraded output, U is the blur or point-spread-function (PSF) matrix, which is user defined, x is the input signal, and v is an unknown noise vector. Problem (1) is ill-posed in nature, admitting infinitely many solutions. A separate class of numerical techniques known as optimization methods is normally used to solve such problems. Often in these solutions the initialization parameters are sufficiently sparse, and sparsity approximation using CS proves efficient for solving the above ill-posed problem.
The recovery of the original signal by a CS algorithm using the common dictionary matrix U is given by

x̂ = argmin_x ||y − Ux||² + λ g(x)    (2)

The above representation is an L2-norm regularized least-squares optimization, where λ is the regularization parameter and g(x) is the gradient feature. If the original signal x is sufficiently sparse, several CS algorithms can precisely restore x from y in the noiseless case, or with good accuracy in the presence of noise v. A dictionary matrix D also enables x to be represented sparsely as x = Dz, where the dictionary coefficients z exhibit the sparse property. The dictionary matrix is usually formed through an orthonormal DCT-based transformation. Used in a CS algorithm, it recovers the original signal as

ẑ = argmin_z ||y − Φz||² + λ ||z||₁    (3)

where Φ = UD. This numerical optimization procedure is termed original recovery and forms the basis for the proposed approximation. The SMV model typically works well with a single source or monochrome channel, but for multi-channel or temporal observations it is extended to the multiple measurement vector (MMV) model

Y = UX + V    (4)

where Y consists of L measurement vectors, X is the desired solution matrix, and V is unknown noise. Each column of X corresponds to an SMV-model solution. Uniqueness and global recovery further require the number of nonzero rows of X to be below a threshold. Since X has a finite number of nonzero rows, multidimensional vectors can be used to significantly improve support recovery. Moreover, ignoring the inter- and intra-correlation among the nonzero rows can deteriorate an algorithm's performance. In our work we exploit this inter- and intra-correlation using an autoregressive moving average (ARMA) model, which yields better results while keeping the computational complexity low even with spatio-temporal (inter and intra) correlation taken into account.
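The ℓ1-regularized recovery in (3) can be sketched with a plain iterative soft-thresholding (ISTA) loop. This is a minimal illustration under an assumed random Gaussian sensing matrix; the sizes, sparsity level, and λ are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small instance of (3): y = (UD) z + v with sparse z, v = 0.
m, n = 60, 128
UD = rng.standard_normal((m, n)) / np.sqrt(m)   # combined sensing/dictionary matrix
z_true = np.zeros(n)
z_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = UD @ z_true

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_z ||y - A z||^2 + lam ||z||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant (squared spectral norm)
    z = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = z - (A.T @ (A @ z - y)) / L                          # gradient step
        z = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)    # soft threshold
    return z

z_hat = ista(UD, y)
print("recovery error:", np.linalg.norm(z_hat - z_true))
```

With far fewer measurements than unknowns (60 vs. 128), the ℓ1 penalty still drives the estimate to the true sparse support, which is the essence of CS recovery.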
The proposed approach also uses directional regularization, in which features are clustered according to their orientation and adaptive optimization is carried out with a different weight per orientation. Directional regularization emphasizes the relevant features while suppressing irrelevant ones, and by accounting for intra- and inter-correlation it usually produces smooth results free of block and boundary artifacts.
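As a toy sketch of the idea (the bin count and weights are illustrative assumptions, not the paper's exact scheme), gradients can be binned by orientation and each bin given its own regularization weight:

```python
import numpy as np

# Toy directional regularization: bin gradients by orientation and apply a
# per-orientation weight (bin count and weights are illustrative).
img = np.zeros((32, 32))
img[:, 16:] = 1.0                      # vertical edge -> horizontal gradients

gy, gx = np.gradient(img)
theta = np.arctan2(gy, gx)             # gradient orientation per pixel
mag = np.hypot(gx, gy)                 # gradient magnitude per pixel

k = np.floor((theta + np.pi) / (np.pi / 2)).astype(int) % 4  # 4 orientation bins
w = np.array([1.0, 0.5, 1.0, 0.5])     # per-bin regularization weights
penalty = float(np.sum(w[k] * mag ** 2))   # a directional, TV-like penalty
print("directional penalty:", penalty)
```

Down-weighting the bin that contains the dominant edge orientation preserves that edge while still smoothing the remaining directions.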
The main contributions of the proposed TSBL scheme can be summarized as: (i) use of directional regularization by measuring the orientation of gradients; (ii) consideration of intra- and inter-correlation using autoregressive (AR) models; (iii) use of dictionary-based learning with a sparsity prior.

Related Works
Any data, whether image, signal, or video, can be exactly reconstructed from a set of evenly spaced samples taken at the Nyquist rate, that is, two times the highest frequency present in the signal. On the strength of this result, signal processing has progressed from analog to digital form. Digitization paved the way for sensing and processing systems that are more robust than most analog systems. In many emerging applications, however, the resulting Nyquist rate is so high that it produces too many samples. To address the computational burden of such high-dimensional data, we frequently rely on compressing signals to obtain a compact representation; with this idea a satisfactory distortion can be achieved. Common techniques for signal compression include transform coding, which seeks a basis offering a compressible representation of the signal [1]. Sparse and compressible signals can be represented with high fidelity by saving only the values and locations of their largest coefficients, a process known as sparse approximation. This constitutes the basis of transform coding, which exploits signal compressibility and sparsity principles, and it informs the design of new sensors and emerging signal-acquisition systems [2,3].
To overcome these computational challenges and difficulties we build on the BSBL method. This algorithm solves the following sparse-signal reconstruction and compressive-sensing problems with performance superior to prevailing algorithms: (1) restoration of block- or cluster-sparse signals with unknown partition; (2) restoration of non-sparse structured signals of any configuration. It has proven worthwhile in a number of applications, including wireless body-area-network monitoring in telecommunications and many pattern-recognition tasks, although its computational complexity is relatively high.

Proposed Methodology
The sparse representation model is another MMV model with additional assumptions on X. It can be written as

Y = UX + V    (5)

where Y ∈ R^(M×L), U ∈ R^(M×N), and X ∈ R^(N×L). In this model we are able to capture intra- and inter-correlation using an autoregressive (AR) model. This article specifically considers the following structure for X.
X = [X₁ᵀ, X₂ᵀ, …, X_gᵀ]ᵀ    (6)

where Xᵢ, for i = 1, …, g, is the i-th patch (block) of X and the block sizes sum to N. The set of block sizes is called the block partition. Among the patches, only a few are nonzero blocks. Owing to sparsity, the significant assumption is that every patch is expected to be correlated with its adjacent patches: the entries in each column of Xᵢ are correlated, and the entries in each row of Xᵢ are correlated as well. Hence this model can be viewed as a linear grouping of the canonical MMV model. Each patch is assumed to follow a Gaussian distribution parameterized as

p(Xᵢ; γᵢ, Bᵢ) ∼ N(0, γᵢ Bᵢ)    (7)

Here Bᵢ represents an unknown positive-definite correlation structure that captures the correlation within each row of Xᵢ, and γᵢ is a nonnegative scalar; γᵢ = 0 defines the i-th patch as a null patch.
Considering the patches Xᵢ to be mutually independent, the prior covariance of X takes a block-diagonal form, Σ₀ = diag{γ₁B₁, …, γ_gB_g}. Assuming the noise matrix V to be negligible, the model reduces to Y ≈ UX.

Temporal Sparse Bayesian Learning
For any sparse input, the BSBL framework models each block xᵢ of x as a multivariate Gaussian distribution.
p(xᵢ; γᵢ, Bᵢ) ∼ N(0, γᵢ Bᵢ)    (11)

where γᵢ is a nonnegative parameter controlling the sparsity of block xᵢ; if γᵢ = 0, the corresponding block xᵢ is a null block. Bᵢ is a positive-definite matrix that captures the correlation structure of the i-th block. The blocks are assumed to be mutually uncorrelated.
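A minimal sketch of drawing a signal from this block prior, assuming a shared AR(1) Toeplitz correlation for Bᵢ (the sizes, correlation coefficient, and choice of active blocks are illustrative assumptions matching the AR modeling described above):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(1)

# Each active block is zero-mean Gaussian with covariance gamma_i * B, where
# B is a Toeplitz matrix generated by an AR(1) correlation coefficient r.
g, d = 8, 4                      # number of blocks, block length
r = 0.9                          # intra-block AR(1) correlation
B = toeplitz(r ** np.arange(d))  # shared correlation structure B_i
gammas = np.zeros(g)
gammas[[1, 5]] = [2.0, 1.0]      # only two blocks are active (block sparsity)

Lc = np.linalg.cholesky(B + 1e-10 * np.eye(d))   # for sampling N(0, B)
x = np.concatenate([np.sqrt(gm) * Lc @ rng.standard_normal(d) for gm in gammas])
print("nonzero blocks:", np.nonzero(x.reshape(g, d).any(axis=1))[0])
```

Blocks with γᵢ = 0 come out exactly zero, which is how the prior encodes block sparsity.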
A threshold is used to prune small γᵢ during the iterations of the algorithm. The smaller the threshold, the fewer γᵢ are removed, so fewer blocks of x are set to zero and the estimate of x is less sparse. In the experiments the threshold is fixed at 0, disabling the pruning mechanism.
The BSBL framework's ability to restore non-sparse signals has interesting scientific implications. In linear algebra, an under-determined system has infinitely many solutions. If the true resolution of an image is degraded, it can be recovered using a CS algorithm; however, the absence of a unique solution makes additional constraints or assumptions necessary. This work shows that, through block structure and block correlation, the computed solution comes very close to the true one. These results not only open up new and exciting possibilities for image compression, but also pose the theoretical problem of reconstructing non-sparse signals from a small number of measurements.
Due to the coupling between γᵢ and Bᵢ, directly estimating the parameters from the above model could incur a heavy computational load. To overcome this, we use an iterative learning approach over two whitened models: the parameters γᵢ and λ are estimated from the spatially whitened model, and the parameter B is estimated from the temporally whitened model. The resulting algorithm alternates estimates between the two models until they converge. This alternating approximation greatly reduces the computational complexity.

B. The Temporally Whitened Model
To simplify the algorithm's development, consider B as a known parameter and define the whitened quantities

Ỹ = Y B^(−1/2),  X̃ = X B^(−1/2),  Ṽ = V B^(−1/2)    (12)

The actual TSBL model is then expressed as

Ỹ = U X̃ + Ṽ    (13)

where the elements in the columns of X̃ and Ṽ are independent; thus the development of the algorithm becomes easier.
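A small numerical check of the whitening idea behind (12), assuming an AR(1) Toeplitz B (the size and coefficient are illustrative): right-multiplying by B^(−1/2) removes the temporal (column) correlation.

```python
import numpy as np
from scipy.linalg import toeplitz, sqrtm

rng = np.random.default_rng(2)

# Whitening sketch for (12): B is an assumed AR(1) Toeplitz matrix.
d = 6
B = toeplitz(0.8 ** np.arange(d))
S = np.real(sqrtm(B))            # symmetric square root B^(1/2)
W = np.linalg.inv(S)             # the whitening matrix B^(-1/2)

X = rng.standard_normal((500, d)) @ S   # rows with covariance B
Xw = X @ W                               # whitened rows
C = np.cov(Xw, rowvar=False)
print("max off-diagonal covariance after whitening:",
      np.abs(C - np.diag(np.diag(C))).max())
```

After whitening, the sample covariance is close to the identity, so the columns can be treated as independent, which is what simplifies the algorithm.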

C. The Spatially Whitened Model
In order to estimate the matrix B, the model is instead expressed as

Ȳ = U X̄ + V̄    (14)

where Ȳ and X̄ are the spatially whitened counterparts of Y and X. X̄ preserves the same block structure as X, yet collectively it has no intra-block correlation within the patches because of the spatial-whitening effect; therefore it is comparatively easy to estimate B from this model.

D. Regularization
Because the number of unknown parameters exceeds the amount of available data, regularization of the estimates is vital; appropriate regularization helps overcome the learning difficulties that result from the ill-posed problem. The regularized estimate of B̃ is obtained by averaging the per-block sample correlations,

B̃ ∝ Σᵢ X̃ᵢ X̃ᵢᵀ / γᵢ    (15)
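One common way to regularize such an estimate in block-sparse Bayesian learning is to constrain B toward an AR(1) Toeplitz matrix; the sketch below assumes that rule (the paper's exact regularizer is not fully legible in the source, so this is an assumption):

```python
import numpy as np
from scipy.linalg import toeplitz

def regularize_B(B_hat):
    """Project a noisy correlation estimate onto an AR(1) Toeplitz matrix
    (assumed regularization rule; not necessarily the paper's exact one)."""
    d = B_hat.shape[0]
    # Ratio of the first off-diagonal to the diagonal gives an AR(1) coefficient.
    r = np.mean(np.diag(B_hat, 1)) / np.mean(np.diag(B_hat))
    r = np.clip(r, -0.99, 0.99)
    return toeplitz(r ** np.arange(d))

B_noisy = toeplitz(0.7 ** np.arange(5)) + 0.05 * np.eye(5)
B_reg = regularize_B(B_noisy)
print(B_reg)
```

Collapsing B to a single AR(1) coefficient sharply reduces the number of free parameters, which is exactly what the ill-posed setting demands.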

Experimental Results
The experiments were carried out with compressive sensing on underwater imagery. Patches are chosen so that each pixel of the image becomes the center of a 7x7 window; the image size is 262x262, and the 7x7 window is slid across the image matrix to split it into patches. A 7th-order high-pass Gaussian filter is applied to discard less active areas and preserve only the more active areas of the image. Patches with high responses are grouped using k-means clustering, with each cluster represented by its centroid. PCA is then applied: the covariance matrix and its eigenvalues are computed, and eigenvalues below a threshold of 4 are selected. Trials on each subset of the collected data allow us to evaluate each patch. The performance of the proposed sparsity-based approach was compared with existing methods from the literature, including Spatio-Temporal Sparsity Based Expectation Maximization (STSB-EM) and basis pursuit, among others. The proposed algorithm was also tested at critical sampling rates (CR) of 50, 60, 70, and 80 Hz. The proposed sparsity-based directional approximation was found to give better results than the existing methods in most cases. The methodology will further be tested on several underwater imaging datasets and corrected for turbidity, barrel distortion, and color. The improvement in the results for the given data demonstrates the efficiency of the proposed method, which can be scaled to data from cross-disciplinary research areas. The compression ratio is estimated as

CR = N / M    (16)

where N is the original image size and M is the size of the reconstructed compressed output image. The sparse block recognition matrix has size M × N, where N ranges up to 512 and CR lies between 0.1 and 10.

Fig. 2: Deblurred image outcome with uniform kernel parameter σₙ = 1.414.
Table I: CR values for original data and recovered data at a frequency of 4 Hz.
Table II: CR values for original data and recovered data at a frequency of 5 Hz.
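The patch extraction and compression-ratio bookkeeping above can be sketched as follows (the helper name is illustrative, and a 16x16 array stands in for the 262x262 test image):

```python
import numpy as np

# Sliding 7x7 patch extraction and compression-ratio computation per (16).
def extract_patches(img, w=7):
    h, ww = img.shape
    patches = [img[i:i + w, j:j + w].ravel()
               for i in range(h - w + 1)
               for j in range(ww - w + 1)]
    return np.array(patches)

img = np.zeros((16, 16))             # stand-in for the 262x262 test image
P = extract_patches(img)
print(P.shape)                       # one 49-pixel patch per valid center

N, M = 262 * 262, 131 * 131          # original vs. compressed sample counts (example)
print("CR =", N / M)                 # compression ratio per (16)
```

For a w x w window there are (h − w + 1)(ww − w + 1) valid centers, so border pixels closer than w//2 to the edge do not generate patches.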

Conclusion
This article developed an efficient and sensible approach, the Temporal Sparse Bayesian Learning (TSBL) scheme, to overcome the disadvantages caused by environmental effects and algorithmic defects, with directional gradients as its basis. We compared existing methodology with the proposed methodology for modeling underwater scenes and obtained improved results. Future work includes lossless compression for underwater image enhancement and transmission applications.