Chapter 4

In layperson's terms, how does electrical impedance tomography work?



Abstract


This chapter explains the basics of electrical impedance tomography (EIT) for sensor experimentalists who are new to the field, focusing on piezoresistive sensing. It closely examines how EIT works starting from the data collection process and going through the inverse problem to produce an image, with reference to the computations made in the open-source EIDORS software. Using figures and examples, key concepts are explained, including singular value decomposition (SVD), the prior, regularization, and Picard coefficients. The scaling relationships among current, conductivity, and the hyperparameter are presented. The implications for handling noise and image quality are then discussed.

4.1. Overview

Section 2.2.2 gave a high-level overview of the EIT process; figure 2.2 is repeated here as figure 4.1. This chapter goes in depth through each step of EIT imaging: data collection (section 4.2), representation of the data (section 4.3), parameters and mechanics of the inverse problem (sections 4.4 and 4.5), and practical ramifications for imaging (section 4.6).


Figure 4.1. The EIT process (left to right) from the experiment to the output difference data, the physical system and model choices, algorithm, and final reconstruction.


4.2. The measurements

4.2.1. Data collection

Figure 4.2(a) shows a typical tactile imaging measurement configuration at one snapshot in the sequence of measurements. A circular sensor is shown for illustration, although other shapes have been employed. The sensors can be considered two-dimensional. Usually 16 electrodes are employed [15] and roughly 200 measurements are made per image, systematically switching the positions of the injection, measurement, and ground electrodes. In EIT, the cost of the small number of electrodes placed solely at the perimeter is an increased complexity of the data acquisition electronics as well as of the subsequent signal processing. Figure 4.2(a) shows a bipolar or single current source injection system, also referred to as pair drive, which is the one most commonly employed in tactile imaging. It also shows a bipolar voltage measurement configuration. Software-controlled switches define the injection and measurement electrodes by changing connections throughout the sequence of measurements.


Figure 4.2. (a) Schematic showing three commonly used current injection patterns: adjacent, pseudo-polar, and polar for a circular sensor. An adjacent measurement pattern is also indicated. The x symbols indicate pairs of electrodes between which voltages are not recorded (colors correspond to those of the injection pattern labels). (b) The calculated DC voltage drop between the injection electrodes, which is the maximum value seen anywhere in the sensor, as a function of the position of the current injection electrode, with ground at electrode 1. (Conditions: 2D sensor, constant I = 3 mA injection current, uniform baseline conductivity σ0 = 10 mS; see units discussion in section 3.1.4.) Simulated equipotential lines for (c) the adjacent and (d) the pseudo-polar injection patterns for a sensor of uniform baseline conductivity (the reference state). Note that the maximum values at the electrodes differ between the two patterns, as expected from (b). (The color bar indicates the voltages, from yellow at 0.9 V to blue at 0 V, with intervals of 0.07 V.) (e) Distorted equipotential lines for the pseudo-polar injection pattern when the sensor has a lower conductivity within the circular area indicated by the white line.


To obtain one element of Vtouch or Vref, a current is injected at one electrode. It flows throughout the entire conductive sensor to ground, and voltages are measured between other electrode pairs. Various voltage measurement patterns have been used, but the dominant one has been the 'adjacent' pattern [5, 6] (also called neighboring), as shown in figure 4.2(a), in which voltage differences between neighboring electrodes are obtained. In practice the voltages are often recorded between each electrode and ground, and the differences between the electrodes are calculated in software.
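
As a minimal sketch in plain Matlab (the variable names are ours and the values are placeholders), the adjacent differences for one injection position can be formed from the ground-referenced readings as follows:

    v_gnd = rand(16, 1);                   % placeholder: voltages of electrodes 1-16 relative to ground
    v_adj = v_gnd - circshift(v_gnd, -1);  % v_adj(k) = V(k) - V(k+1), wrapping from electrode 16 back to 1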

Various electrode pairs can also be used to inject the current [1, 6]. In the adjacent injection pattern the current-injecting, or driving, electrodes are again immediate neighbors, such as electrodes 1 and 2, whereas in the 'polar' pattern (also called opposite) they are on opposite sides of the sensor (i.e. electrodes 1 and 9). Other, non-bipolar injection patterns, such as trigonometric [7], optimal (multiple current source) [8], and diagonal [9], have also been proposed, but the hardware complexity required for implementation [4, 10, 11], which may require multiple current sources, means that they are rarely used in practice [12, 13], so we do not treat them here.

To name the electrode pairs, we refer to the gaps or offsets between them, and the injection–measurement (I–M) pattern is named with the injection electrode pair followed by the measurement pair. Therefore the adjacent–adjacent I–M pattern is, for 16-electrode systems, called 1–1 and the polar-adjacent pattern is 8–1. Another commonly used pattern, pseudo-polar-adjacent, is 7–1. If polar injection is used with polar measurement, it is called 8–8. The latter configuration has a high symmetry since rotation by 90°, 180°, and 270° produces identical results for a uniform initial sensor conductivity. One disadvantage of this naming convention is that the names change if a different number of electrodes is used. For example, with 32 electrodes, 8–8 becomes 16–16. In this book, all the I–M pattern designations refer to a 16-electrode system unless otherwise noted.

For each pair of injection electrodes, voltages are measured between multiple other pairs. For adjacent current injection combined with adjacent measurement (1–1), when the injection pair is at positions 1, 2, the measurement pairs are first 3,4 then 4,5, etc, ending at 15,16. (In practice, the voltages may be measured simultaneously rather than sequentially [14].) Voltages are not measured (or they are recorded and discarded) between injecting electrodes and their immediate neighbors, as indicated in figure 4.2(a), to avoid contact resistance errors [12], so 13 voltage differences are obtained per injection for the 1–1 pattern and 12 for most others, except for 8–8, for which 14 are collected. The injection electrodes are then rotated to the next position (e.g. for adjacent from 1,2 to 2,3, while for polar from 1,9 to 2,10), and voltages are once again obtained between the other electrodes. The process continues around the perimeter for a total of m measurements. For the 1–1 pattern this sequence leads to a total of m = 16 × 13 = 208 voltage measurements and for most other patterns, 192 measurements. This set of measurements is known as a 'frame'. Due to various symmetries, however, the number of independent measurements is only on the order of 100, further discussed below. The number of measurements in the frame increases approximately with the square of the number of electrodes: a 32-electrode system with the 7–1-type pattern requires 896 measurements.
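
This bookkeeping can be verified with a short plain-Matlab sketch. The skip rule used here, omitting any measurement pair that contains a current-carrying electrode, is one common convention and reproduces the counts of 13, 12, and 14 per injection quoted above; the loop structure and variable names are ours.

    n_el = 16;
    inj_gap = 1;  meas_gap = 1;     % 1-1 (adjacent-adjacent); use 7 and 1 for 7-1, or 8 and 8 for 8-8
    m = 0;
    for inj = 1:n_el
        drive = mod([inj, inj + inj_gap] - 1, n_el) + 1;          % current-carrying electrode pair
        for meas = 1:n_el
            pickup = mod([meas, meas + meas_gap] - 1, n_el) + 1;  % voltage-measuring electrode pair
            if isempty(intersect(drive, pickup))                  % skip pairs that share a driven electrode
                m = m + 1;
            end
        end
    end
    fprintf('measurements per frame: %d\n', m)   % 208 for 1-1, 192 for 7-1, 224 for 8-8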

4.2.2. Potential drops

In tactile imaging, constant DC current, rather than AC excitation, is typically used and the sensors can be considered two-dimensional. Figure 4.2(b) shows the voltage drop for a planar sensor between the injecting electrode and ground as a function of injecting electrode offset position; electrode 1 is kept at ground [15]. For a constant injection current, the voltage between the injecting electrodes is higher the further apart they are, since V = IR and R increases with distance. These simulation results agree with calculations of the resistance between points on a resistive circular area [16]. Since voltage differences between any other two points on the perimeter are fractions of this largest voltage difference, measured voltages will be highest overall for the polar injection pattern.

As mentioned above, to obtain an image of a conductivity change, such as due to a touch on a tactile sensor, the conductivity within the sensing area is compared to a reference image, which for tactile imaging is typically one obtained without a touch. Simulated equipotential lines are shown in figure 4.2(c) for I–M = 1–1 (adjacent–adjacent) and in figure 4.2(d) for 7–1 (pseudo-polar–adjacent) for a circular sensing area with uniform background conductivity. These are the reference images. Consistent with figure 4.2(b), the voltage drop between the current injecting electrodes is larger for pseudo-polar injection, as shown by the color scale. These figures also illustrate the greater voltage gradient (field strength) in the center of the sensor for pseudo-polar than for adjacent injection [17], since in the latter the current flows primarily at the periphery of the sensor. Figure 4.2(e) shows the equipotential lines for pseudo-polar injection for a sensor with a circular region of lower conductivity (higher resistance) than the background, indicated by the white line, which could be the result of a touch. The equipotential lines are distorted from those in figure 4.2(d) by the change in conductivity Δσ: more voltage is now dropped over that region of locally higher resistance and there are thus small changes in the perimeter voltages. EIT algorithms 'reconstruct' the conductivity changes that led to the voltage changes. Figure 4.2 is key for understanding why some injection patterns have been preferred in the literature: the further apart the injection electrodes, the greater the voltage drops between all other electrodes, and the larger the field gradients at the location of the perturbation, the larger the relative voltage changes.

4.2.3. Voltage data

Figure 4.3(a) shows the 208 voltages in the arrays Vref and Vtouch for simulations of the I–M = 1–1 pattern without and with a conductivity change near the perimeter, respectively, from a sensor with 16 electrodes. The voltages in the reference state are high when the measurement electrode is near the injection electrode, which occurs cyclically as the measurement pair position performs its rotation for each injection position in the frame. (For example, referring to figures 4.2(a) and (c), the voltage difference between the pair of measurement electrodes is highest when the injection electrode pair is 1,2 and measurement electrode pair is 3,4 or 15,16 and again when the injection electrode pair is 2,3 and measurement electrode pair is 4,5 or 16,1.) For 1–1, there are 13 points in each voltage measurement sub-cycle. The voltage difference decreases with distance from the injection pair, resulting in a series of 16 U-shaped features. (If the injection electrode pair is 1,2 the voltage difference between adjacent electrodes is smallest when the measurement electrode pair is 9,10.) For I–M = 7–1 and 7–7, on the other hand, there are regions where the voltage differences are negative (figure 4.4), because, referring to figure 4.2(d), the orientation of the measurement pair with respect to ground switches in half of the sub-cycle.


Figure 4.3. (a) For a 16-electrode system, boundary potential measurement frames Vtouch and Vref resulting from the 1–1 I–M pattern with (red) and without (black) a touch near the perimeter (at x = 0.7, y = 0.0, circular target, radius of conductivity change rT = 0.2). (b) The difference Δ = Vtouch − Vref associated with (a). (c) For I–M = 7–1, Δ for a target at x = 0.7 (blue) and for a target at x = 0.0 (green). The vertical scales in these plots differ. (Simulation conditions: I = 3 × 10−3, σ0 = 10−2, Δσ = −4 × 10−3; no noise; see units discussion in section 3.1.4.)


Figure 4.4. Using the same simulation conditions as in figure 4.3, Vtouch (red) and Vref (black) resulting from the (a) 7–1 and (b) 7–7 I–M pattern for a circular target at x = 0.7. Note differences in vertical scales of these plots.


In figure 4.3(b) Vref is subtracted from Vtouch, giving the difference array, Δ = Vtouch − Vref, which is the information used by the EIT algorithms (figure 4.1). (The values in Vtouch and Vref are voltage differences between pairs of measurement electrodes, while the values in Δ are differences in those same pairs at different times.) Note that in some places the differences are positive while in others they are negative; all contain information about the touch and contribute to the EIT image. The m values Δi in Δ are small, with many of them near zero (note the difference in y-axis scales between figures 4.3(a) and (b)). In this simulation the target was near the perimeter, closer to some of the electrodes, so the differences Δi are largest there. The Δi increase proportionally with the size of the stimulus, i.e. the strength of the touch.

Figure 4.3(c) shows Δ for the 7–1 pattern under identical conditions. The Δi are larger than in figure 4.3(b) (again note the difference in y-axis scales), as expected from figure 4.2(e). Also note that, as expected from figures 4.2(c) and (d), the maxima are more uniform across the vector instead of more focused at a few positions. Figure 4.3(c) also illustrates that Δ changes with the location of the target. The Δi for a target at x = 0 were smaller and showed periodicity.

The simulated data in figure 4.3 were noise-free. Experimentally, noise may be contributed by both Vtouch and Vref, and if these are in turn obtained from the difference of two measurements between an electrode and ground, then there are noise contributions in both of those measurements. In practice each voltage is typically obtained by averaging multiple readings on each electrode to reduce noise, although this increases the data collection time. Often Vref is obtained by averaging a large number of frames, making it essentially noise-free. The standard deviation of the noise may be 1–300 μV [18]. At minimum the noise between an electrode and ground is at the level of the least significant bit of the analog-to-digital converter (ADC).
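
In software the difference data amount to an element-wise subtraction after any frame averaging; a minimal plain-Matlab sketch, with placeholder data standing in for the acquired frames:

    V_ref_frames = 1e-3 * rand(208, 100);   % placeholder: 100 no-touch frames of 208 measurements each
    V_touch      = 1e-3 * rand(208, 1);     % placeholder: one frame acquired during the touch
    V_ref = mean(V_ref_frames, 2);          % frame averaging makes the reference nearly noise-free
    Delta = V_touch - V_ref;                % the difference data passed to the reconstruction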

4.3. The forward problem, using the sensitivity matrix approach

Given a known conductivity distribution in a region Ω (the sensor area), one can obtain the potentials at the perimeter; this is called the 'forward' problem [19]. For the DC current injection typically used in tactile imaging, the experimental situation is simulated using the continuum version of Ohm's law,

${\bf{j}}=\sigma {\bf{E}}$ (4.1)

where j is the current density, σ the conductivity, and E the electric field, together with the conservation of current [20] and the continuum version of Kirchhoff's law,

$\nabla \cdot (\sigma \nabla \phi )=0$ (4.2)

where ϕ is the electric potential. Equation (4.2) is known as the Laplace equation [60]. (At DC or low frequency AC, magnetic fields do not need to be considered.)

4.3.1. The Jacobian or sensitivity matrix

If the sensor area Ω is divided into spatial sub-regions Ωk (figure 4.5(a)), the voltage Vi between the measurement electrodes is the sum of contributions from all the Ωk. The contribution to a change in Vi due to a change in the conductivity Δσk of Ωk depends on the amount of current flowing through Ωk, the distance of Ωk from the electrodes, and the field lines (E = −∇ϕ) at Ωk associated with the injection and measurement electrodes [19]. Specifically, the sensitivity to Δσk, or the amount that each element Ωk contributes to the measured signal [17], is highest where the injection and measurement fields are parallel: it depends on their dot product, where the field due to the measurement electrodes is taken as if they had been used for driving [21–23]. The value of the dot product is the same upon reversing the injecting and measuring electrodes by the reciprocity theorem or lead theorem [23]. Expressing the above mathematically:

$\dfrac{\partial {V}_{i}}{\partial {\sigma }_{k}}=-{\displaystyle \int }_{{{\rm{\Omega }}}_{k}}\nabla {\varphi }_{i,k}\cdot \nabla {\varphi }_{I,k}\,{\rm{d}}{\rm{\Omega }}$ (4.3)

where $\nabla \,{\varphi }_{i,k}$ is the potential gradient that would arise at element k if current were injected into the voltage-measuring electrode pair and $\nabla \,{\varphi }_{I,k}$ is the potential gradient arising from the current-injecting electrode pair.
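
As a toy numerical illustration of equation (4.3) for a single element (the gradient values below are made up, and the integral is approximated by a dot product times the element area):

    grad_phi_I = [0.8; 0.1];    % field at element k from the current-injecting pair (made-up values)
    grad_phi_i = [0.7; -0.2];   % field at element k if the measuring pair were driven instead
    area_k = 1e-4;              % element area, approximating the integral over Omega_k
    dVi_dsigmak = -dot(grad_phi_i, grad_phi_I) * area_k;   % largest in magnitude where the two fields are parallel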


Figure 4.5. (a) The contribution of each mesh element Ωk to the voltage difference Vi between a pair of electrodes depends on the dot product of the injection and measurement fields there, the current flowing through it, and its distance from the electrodes. In this figure, the 7–1 configuration is shown for illustration. Two mesh elements at different locations 1 and 2 are indicated in green as examples. Vi indicates a particular combination of driving and measuring electrodes. (b) Example elements of the Jacobian, or the sensitivity matrix for an injection-measurement pattern having 192 measurements in the frame. Rows correspond to the injection-measurement positions in the measurement frame. Each row of J is an image showing the sensitivity at all locations on the mesh for that measurement.


In finite element modeling (FEM) the regions Ωk are mesh elements. The matrix J of terms

${J}_{i,k}=\dfrac{\partial {V}_{i}}{\partial {\sigma }_{k}}$ (4.4)

which depends on the sensor conductivity σ0, is called the Jacobian or sensitivity matrix (figure 4.5(b)), where i goes from 1 to m, the length of Δ (the frame size), and k goes from 1 to n, the number of mesh elements. In general the Jacobian of a vector-valued function F is the matrix whose (i,k)th entry is $\partial {F}_{i}/\partial {x}_{k}$. In EIT, the Jacobian relates changes in voltages at each electrode to changes in conductivity at each mesh element for every drive–measurement pair in the particular I–M pattern. This transformation matrix is a linearized mapping between the boundary voltages and the internal conductivity, assuming small variations for which the derivative (slope) is sufficient to describe the dependence. (J is m × n. For the I–M pattern 7–1 in a 16-electrode system, m = 16 × 12 = 192. Using the EIDORS 2-D mesh model j2d4c, the number of mesh elements n = 20,230; so J is 192 × 20,230.) The linear approximation can be used with voltage difference data because the changes in voltages are small. On the other hand, for a 'static' image reconstruction of a medium's absolute conductivity from a single set of data, the reconstruction problem is highly nonlinear [24].

We can write

${\boldsymbol{\Delta }}={\bf{J}}{\boldsymbol{\sigma }}$ (4.5)

where σ is a vector of the conductivity changes Δσk of the mesh elements and Δ is the vector of voltage differences (figures 4.3(b) and (c)). The vector σ encodes an image. In EIT forward problem simulations, σ is the difference between the baseline conductivity reference image and the image with the target (figure 2.3). The assumption in equation (4.4) should be emphasized: inverse solution methods often assume that the conductivity changes Δσk in σ are small in order to linearize the problem. However, in tactile imaging this assumption may easily be violated. For example, near the edge of the sensor, large-area or multiple high-contrast contacts result in incorrect reconstructions.

The Jacobian is found from the reference baseline conductivity and from the I–M pattern: it comes from the reference forward model (e.g. figures 4.2(c) and (d)). It contains no information about the target or the noise.
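
In EIDORS, this Jacobian is computed from a forward model, a stimulation pattern, and a uniform background image. The sketch below uses EIDORS function names (mk_common_model, mk_stim_patterns, mk_image, calc_jacobian), but the model string and the argument values are illustrative and should be checked against the EIDORS documentation; it assumes EIDORS is installed and on the Matlab path.

    imdl = mk_common_model('j2d4c', 16);            % 2D circular model, 16 electrodes (mesh name from the text)
    imdl.fwd_model.stimulation = ...
        mk_stim_patterns(16, 1, [0 7], [0 1], {'no_meas_current'}, 3e-3);  % 7-1 I-M pattern, 3 mA drive
    img0 = mk_image(imdl.fwd_model, 1e-2);          % uniform baseline conductivity sigma_0
    J = calc_jacobian(img0);                        % m x n sensitivity matrix for the reference state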

The sensitivity of individual injection and measurement electrode combinations in the data frame to different locations on the sensor can be visualized [61, 62], as shown in figure 4.6; each row of J is one such image (normalized by mesh element size). The first seven rows (i.e. J1,: – J7,:, figure 4.5) out of the total m are shown for the adjacent (1–1) and pseudo-polar (7–1) patterns, with red indicating regions of high positive sensitivity (positive slope in equation (4.4)) and blue high negative sensitivity. (The other rows can be inferred by symmetry.) The adjacent pattern has high sensitivity at the perimeter, highest when the injection and measurement electrode pairs are close together (e.g. J1,:) as discussed above, and low sensitivity at the center, whereas the pseudo-polar pattern sensitivity is just as high or higher at the perimeter but extends into the center for some combinations, particularly J1,: and J7,:. (Other patterns are shown in appendix C.) Note that many values of the Jacobian are close to zero (represented as white in the images), which poses a challenge for the inverse problem, as discussed below.


Figure 4.6. Visualization of some of the rows of the Jacobian J, or sensitivity matrix, for two patterns, I–M = 1–1 (top) and 7–7 (bottom), for a 16-electrode system and a circular sensor with uniform conductivity. The images correspond to the first current injection position and the first seven voltage measurement positions; the rest of the measurement positions are mirror images and the rest of the injection positions are rotations of the first. Red indicates high positive sensitivity, blue high negative sensitivity, and white no sensitivity. (The same color scale was used for all 14 images.)


4.3.2. Singular value decomposition

To go from voltages at the perimeter to a conductivity distribution over the region, i.e. to take the measured data and create a conductivity image, is called the inverse problem, discussed in section 4.4. This involves inverting the Jacobian to find σ = J−1 Δ.

One way this inversion can be done, and which allows valuable insight into the reconstructions, is by factoring J into three matrices by performing a singular value decomposition (SVD) [20, 25, 26]:

${\bf{J}}={\bf{U}}{\bf{S}}{{\bf{V}}}^{{\rm{T}}}$ (4.6)

where S is a diagonal matrix of so-called singular values si = Sii , ordered from highest to lowest (see section 4.3.4). The columns of U, uj, and the columns of V, ${\mathbf{v}}_{{\rm{k}}}$, are the left and the right singular vectors of J. The superscript T indicates the transpose, since the values are real. (For a lucid discussion of SVD in the context of image processing and 'eigenfaces' see [27], and for SVD discussed theoretically see [28].) The number of nonzero values in S, and also the number of nonzero columns in V, is given by the rank r of J, or the number of linearly independent columns. By the reciprocity theorem and because of various symmetries, depending on I–M, r is less than m (the matrices are rank deficient). (For I–M = 7–1, r = 104 while for I–M = 8–8, r = 28.)

What SVD does is re-express J, the linearized relationship between voltage changes and conductivity changes, in terms of two orthonormal bases: the uj are the basis vectors for the voltage measurements for a particular I–M pattern ('eigen-Δ', so to speak, discussed in section 4.3.5) and the ${\mathbf{v}}_{{\rm{k}}}$ are the basis vectors, or eigen-images, for the conductivity changes (section 4.3.3). S, like J, is size m × n, U is m × m, and V is n × n. (For I–M = 7–1, S is 192 × 20,230 with 104 nonzero diagonal values and all other entries zero, or, due to numerical rounding errors, near zero. Note: for 7–1 the command svd in Matlab returns U as 192 × 192, but it returns S as a sparse 192 × 1 matrix and V as only 20,230 × 192.) Because the uj and ${\mathbf{v}}_{{\rm{k}}}$ are orthonormal bases we can write:

${\bf{J}}=\displaystyle \sum _{i=1}^{r}{s}_{i}{{\bf{u}}}_{i}{{\bf{v}}}_{i}^{{\rm{T}}}$ (4.7)

where each of the r terms ${s}_{i}{{\bf{u}}}_{i}{{\bf{v}}}_{i}^{{\rm{T}}}$ is also m × n. (For 7–1, the expansion consists of r = 104 such terms.) From here forward we use the same subscript i for all three components to highlight that they are matched in this way.

U and V are spectral bases [29]. Just as a Fourier expansion of sines and cosines can be used to represent any voltage signal, sums of ui can represent any measurement signal and sums of ${\mathbf{v}}_{i}$ any spatial conductivity change, and the singular values si are analogous to the coefficients of a Fourier expansion and show the overall importance of the ith component to J, in descending order. Recapitulating, the ui form the basis for Δ (the perimeter voltage frames can be created by combining these orthogonal vectors) and the ${\mathbf{v}}_{i}$ form the basis for σ (the conductivity image can be created from these orthogonal basis functions). The terms with the largest si are those that contribute most of the information to the conductivity image for a given I–M pattern.
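
The decomposition and the expansion of equation (4.7) can be reproduced directly in plain Matlab, taking J to be the sensitivity matrix of section 4.3.1 (the final check is ours):

    [U, S, V] = svd(J);                            % J = U*S*V'
    s = diag(S);                                   % singular values s_i, largest first
    r = rank(J);                                   % number of independent measurements (e.g. 104 for 7-1)
    J_r = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';    % sum of the first r terms s_i*u_i*v_i' of equation (4.7)
    rel_err = norm(J - J_r, 'fro') / norm(J, 'fro')   % near machine precision: r terms carry essentially all of J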

Matrix inversion is an important application of SVD. The inverse (or pseudo-inverse or Moore–Penrose generalized inverse or minimum 2-norm least squares inverse) is found by:

${{\bf{J}}}^{-1}={\bf{V}}{{\bf{S}}}^{-1}{{\bf{U}}}^{{\rm{T}}}$ (4.8)

where S−1 is formed simply by taking the reciprocals of the nonzero diagonal elements, 1/si , and transposing.

If the sensitivity matrix has values near zero, for example because little current passes through an element Ωk , as is the case for the 1–1 pattern near the center of the sensor (figure 4.6), then the inversion will be unstable because of division by such small values [19]. This instability makes the inverse problem 'ill-posed'.
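
Continuing the sketch above, the pseudo-inverse of equation (4.8) makes the instability explicit: the component of the data along ui is scaled by 1/si in the image, so the smallest retained singular value sets the factor by which noise is amplified.

    tol = max(size(J)) * eps(s(1));                          % treat singular values below this as numerically zero
    r = sum(s > tol);
    J_pinv = V(:, 1:r) * diag(1 ./ s(1:r)) * U(:, 1:r)';     % V*S^-1*U', equivalent to pinv(J)
    noise_gain = 1 / s(r)                                    % gain applied to noise along u_r; huge when s_r is tiny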

4.3.3. The image modes or eigen-images, vi

The eigen-images resemble standing waves or mode shapes of the head of a drum (figure 4.7). As explained in [30], these can be visualized by treating the vectors as image data in EIDORS. Figure 4.7(a) shows, for a 16-electrode system, examples of ${\mathbf{v}}_{i}$ that have circular symmetry with the index i increasing left to right; note how the spatial frequency increases with i. Note that the amplitudes of the ${\mathbf{v}}_{i}$ do not depend on current or sensor conductivity—that information is contained in the singular values, not the basis functions. Figure 4.7(b) shows examples of ${\mathbf{v}}_{i}$ with four lobes, two positive and two negative, and figure 4.7(c) illustrates a peripheral pattern as the index and angular frequencies increase. The first eight ${\mathbf{v}}_{i}$ (i = 1–8) for I–M = 1–1 and 7–1 are shown in figures 4.7(d) and (e), respectively. (All the ${\mathbf{v}}_{i}$ for these two I–M patterns are shown in appendix B.) By comparing figures 4.7(d) and (e), it can be seen that the relative importance of the particular ${\mathbf{v}}_{i}$ differs between 1–1 and 7–1, as indicated by their order, and that while 1–1 and 7–1 share some of the mode shapes, some that are found in one are not found in the other.


Figure 4.7. Examples of the ${\mathbf{v}}_{i}$, or eigen-images or modes, for a 16-electrode system. Red colors indicate positive amplitude, blue negative, and white zero; the same color scale was used for all the images. With increasing i from left to right, ${\mathbf{v}}_{i}$ with (a) circular symmetry and (b) four lobe symmetry. (c) Examples of ${\mathbf{v}}_{i}$ having a single peripheral ring of lobes, increasing in spatial frequency. The first eight ${\mathbf{v}}_{i}$ for two I–M patterns, (d) 1–1 and (e) 7–1.


To obtain an image with fine resolution, a large number of ${\mathbf{v}}_{i}$ is needed, just as in a Fourier expansion a large number of terms is needed to recreate the corner of a step function from sine waves. (For an illustration of how low-frequency modes form the main features of the target and high-frequency components reveal its details, see [21].) To separate adjacent targets (figure 2.1) or to recreate pointed features, a full, or nearly full, complement of ${\mathbf{v}}_{i}$ is required. Importantly, a large number of ${\mathbf{v}}_{i}$ is also needed to image features at the center of the sensor [25, 31]: note in figure 4.7 how the higher i modes have fine structure at the image center, whereas the lower i modes do not.

Both 1–1 and 7–1 have a total of 104 ${\mathbf{v}}_{i}$ (the rank of J) in 16-electrode systems; for eight electrodes the equivalent patterns 1–1 and 3–1 have only 20, and for 32 electrodes the equivalent patterns 1–1 and 15–1 have 430. Other I–M patterns have fewer ${\mathbf{v}}_{i}$ due to symmetries. For example 8–8 has the highest symmetry in the 16-electrode system and has only 28 modes. Thus, 1–1 confers greater resolution than 8–8 under low noise conditions.

Combinations of the ${\mathbf{v}}_{i}$ can produce images that bear little resemblance to the individual ${\mathbf{v}}_{i}$, as shown in the reconstructions of figure 2.1. The process of combining eigen-images is illustrated in figure 4.8 for a triangular target. The top two rows show the ${\mathbf{v}}_{i}$ that make the most significant contributions for this target, up to i = 28. The images in the bottom row show the weighted running sums as the ${\mathbf{v}}_{i}$ are added, moving left to right, with the final EIT image on the right. Note that the weight for ${\mathbf{v}}_{1}$ is negative, so height is added to the center of the sensor and subtracted from the perimeter; the 'energy' is focused in the center of the sensor. Weights are further discussed in section 4.6.2. Adding the next two ${\mathbf{v}}_{i}$ with significant weights, ${\mathbf{v}}_{5}$ and ${\mathbf{v}}_{6}$, adds height to the right side, and so forth.


Figure 4.8. Example of how an EIT image is created from a sum of ${\mathbf{v}}_{i}$ for a triangular target at x = 0.5. (a) The ${\mathbf{v}}_{i}$ that contribute most significantly to this target, up to i = 28. Yellow numbers indicate the index i. (b) The weighted sums as the ${\mathbf{v}}_{i}$ above each image in the bottom row are added. (Same color scales.) The final image is on the lower right.


4.3.4. The singular values, si

The singular values si for 7–1 and 8–8 are plotted in figure 4.9. The si are usually plotted on a log scale (the y-axis on the right). (Sometimes plots of si/s1 are used instead.) Note the large range of values. The ratio of the largest to the smallest si is the condition number, and the large ratio makes S ill-conditioned for inversion, as will be discussed below. Also note the sudden fall-off by many decades for 7–1 above i = 104 and the fall-off for 8–8 above i = 28, which are the ranks of S for these two patterns. The tiny values for i > r are nonzero only because of numerical rounding errors in the FEM.


Figure 4.9. Singular values si from I–M = 7–1 and 8–8 shown on both linear (left, gray and black lines) and log scales (right, color points on lines) up to index i = 120 (16 electrodes, I = 1 × 10−3, σ0 = 0.01).


The singular values depend on the I–M pattern and number of electrodes, and they scale with the current and conductivity, but they do not depend on target or on noise characteristics since J is based on the reference conductivity distribution. As discussed further below, the singular values are critical to understanding the effects of noise and regularization. Signal and image components (which are linked) with smaller singular values are more affected by noise and are preferentially removed by regularization.

4.3.5. The voltage difference modes or eigen-Δs, ui

As discussed above, the ui form an orthonormal basis for Δ: sums of the modes ui can create any voltage difference vector Δ (figures 4.3(b) and (c)), just as the eigen-images are summed to form the final EIT image (figure 4.8). The modes u1, u2, and u5 for 1–1 and 7–1 are plotted in figure 4.10, illustrating that the ui also differ among the various I–M patterns. For example, u1 for 7–1 is not found anywhere in the set of modes for 1–1. The amplitudes of the ui do not depend on current or sensor conductivity.


Figure 4.10. The first two and the fifth ui for (left) 1–1 and (right) 7–1 (16 electrodes).


Examples of the component matrices contributing to J are included in appendix D.1. The first 10 entries of ${s}_{5}{{\bf{u}}}_{5}{{\bf{v}}}_{5}^{{\rm{T}}}$ for I–M = 7–1 are shown.

4.4. The inverse problem for reconstruction

As mentioned above, the goal of EIT is to invert J to reconstruct the conductivity changes over the sensor area Ω from the voltage differences Δ measured at the perimeter. The inverse problem can be stated as:

${\boldsymbol{\sigma }}={{\bf{J}}}^{-1}{\boldsymbol{\Delta }}$ (4.9)

However, we are looking for more detail in σ than we have meaningful information from Δ [22]. The number of unknowns (the number of mesh elements in σ) is much larger than the number of independent equations (given by the rank of J), and because of the large condition number of S, this problem is ill-conditioned and thus ill-posed [19, 32]. (Note: these two terms are often used interchangeably in the literature. The problem is also called under-determined.) This means that small changes in Δ due to noise can result in large changes in σ [33]. Because of noise, the problem to be inverted is actually this one [30]:

${\boldsymbol{\Delta }}={\bf{J}}{\boldsymbol{\sigma }}+{\bf{n}}$ (4.10)

where n is the noise on each measurement and is assumed to be additive. Since the inverse problem is ill-posed, the solution in the presence of noise is not unique, and it requires so-called regularization to obtain an accurate approximate reconstruction. (Alternatively stated, noise is amplified, resulting in meaningless solutions, as shown in section 6.1.3.3.2.) The regularized equation that is solved is:

${\boldsymbol{\sigma }}={{\bf{J}}}_{{\bf{reg}}}^{-1}{\boldsymbol{\Delta }}$ (4.11)

4.4.1. Regularization

Regularization approaches were reviewed in [22], and the advantages of various methods have received much consideration [34–39]. We focus on Tikhonov regularization, a so-called penalty method, and in particular on the approach in [40]. Numerous prior treatments have already provided mathematical justifications and derivations, so the following discussion focuses first on the general idea in plain English, and then on the implementation mechanics.

Regularization essentially entails using known information about the physical reality of the conductivity and how it changes as a constraint on the solution. It involves balancing the measured data with that prior information (figure 4.11), employing a hyperparameter λ to act as a dial that controls the amount of regularization. The raw data may contain a lot of high frequency components, while the assumption about the physical reality may be that the conductivity changes are smooth, with conductivities not being greatly different in immediately neighboring elements Ωk . Also, it can be assumed that the conductivity of the medium is everywhere real, positive, and nonzero [33]. The baseline conductivity is typically known.


Figure 4.11. The concept of regularization. An image solution that is a good fit to the noisy measured data is balanced against an image that is a good fit to a sensor model with known information and assumptions about the conductivity. For tactile imaging the baseline conductivity is a uniform σ0.


Regularization penalizes particular image features, as determined by a 'prior', discussed below, to eliminate destabilizing large variations and make the inverse problem better posed [20]. If high spatial frequency components are penalized, for example by down-weighting components of the solution for which si < λ [21, 41], the image is smoothed. Of course, the greater the smoothing, the lower the resolution. On the other hand, if λ is too small the inversion becomes numerically unstable or the image becomes dominated by noise.

The terms L1 and L2 norm are frequently encountered in discussions of penalizing high frequency contributions. In regularization based on the L1 norm, distances are calculated as the sum of the absolute values of the terms, ${||{\bf{x}}||}_{1}=\sum | {x}_{i}| $. These approaches are also known as total variation [22]. Regularization using an L2 norm (also known as a Euclidean norm), as is used in Tikhonov regularization, instead calculates distances as Euclidean lengths, ${||{\bf{x}}||}_{2}=\sqrt{\sum {x}_{i}^{2}}$.

4.4.2. Algorithms for the inverse problem, Gauss–Newton one-step method

Much has been written about algorithms that can be used for the inverse problem (see for example [36, 42–44]). They have been grouped in various ways (use of L1 vs L2 norm, projection methods versus penalty methods, etc), but for tactile imaging the most important distinction is whether they are one-step or iterative. Higher accuracy and finer resolution can be achieved using iterative methods, but they take more time and computational power, and speed is essential for tactile imaging [4, 45]. Iterative methods are required for using the L1 norm [46]. Importantly, one-step methods can pre-calculate the regularized inverse of the sensitivity matrix [23, 47], so they require one matrix multiplication instead of matrix inversions [40]. In order to provide real-time imaging that allows immediate responses to tactile cues, we therefore limit the discussion here to the Gauss–Newton (GN) one-step method [35, 40], which uses an L2 norm, because of its speed [37, 48].

In the GN one-step inversion, the regularized inverse, equation (4.11), is obtained by finding ${{\bf{J}}}_{{\bf{reg}}}^{-1}$ this way:

${{\bf{J}}}_{{\bf{reg}}}^{-1}={({{\bf{J}}}^{{\rm{T}}}{\bf{W}}{\bf{J}}+{\lambda }^{2}{{\bf{R}}}^{{\rm{T}}}{\bf{R}})}^{-1}{{\bf{J}}}^{{\rm{T}}}{\bf{W}}$ (4.12)

(For images based on element nodes, this step is performed in the Matlab file nodal_solve.m, and for elements, in inv_solve_diff_GN_one_step.m.) Before examining the components of equation (4.12), it should be noted that it is an example of generalized Tikhonov regularization, which solves the problem of an ill-posed inverse, in this case the near-singular matrix JT J (meaning that the ratio of the largest to smallest singular values, s1/sr is large) by adding positive elements, given by the regularization term λ2 RT R. Equation (4.12) is further discussed in appendix D.3.
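
A minimal plain-Matlab sketch of the one-step solve follows; W and RT R are described in the next paragraphs, and the identity (Tikhonov) prior is used here only to keep the sketch short, whereas EIDORS defaults to the Laplace prior.

    [m, n] = size(J);
    W      = speye(m);                % identity when no measurement covariances are specified
    RtR    = speye(n);                % Tikhonov (identity) prior, for brevity
    lambda = 1e-2;                    % hyperparameter
    J_reg_inv = (J' * W * J + lambda^2 * RtR) \ (J' * W);   % regularized inverse of equation (4.12)
    sigma_hat = J_reg_inv * Delta;    % one matrix multiplication per frame once J_reg_inv is precomputed
    % For a 20 000-element mesh the solve above is a large dense operation; it is done
    % once per model and reused for every frame, which is what makes one-step methods fast.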

In equation (4.12), W is a weight matrix (calculated in calc_meas_icov.m), which is the identity matrix (m × m) by default if no measurement covariances are specified. For uncorrelated noise, W is a diagonal matrix whose entries are the inverse of the noise variance of each measurement [46].

R is the regularization matrix, and RT R (n × n) (obtained in calc_RtR_prior.m) acts as a 'filter', chosen to give preference to solutions that are considered to be desirable. For example, with the Tikhonov prior, RT R = I. (If W = I and RT R = I, then the method is called zeroth order Tikhonov regularization; for difference imaging assuming identical channels, W = I [35].) Note that the Tikhonov prior should not be confused with Tikhonov regularization, with which it is used. The NOSER prior instead scales the identity matrix by the sensitivity [46] (table 4.1, table D.1) (specifically, the diagonal elements of RT R are (JT J)p where p = 0 to 1, typically 0.5 [35]). The EIDORS default is the Laplace prior (implemented in prior_laplace.m), which is a second order high pass filter (defined for a general mesh as a factor of 3 for the element and −1 for each adjacent element); it adds matrix elements not only to the diagonal but also to places off the diagonal. The Laplace prior was compared to others in [39] and found to better preserve feature edges, while the performance of various priors with the GN algorithm was discussed in [37]. It is important to specify the prior if this type of approach to finding the inverse is used: the effect of different priors on EIT images is shown below, in section 4.5. RT R does not change with forward model parameters such as current, conductivity, and I–M pattern.

Table 4.1.  The first five row and column entries of RT R for two priors. (2D circular sensor, constant conductivity).

NOSER 1 2 3 4 5
1 0.0068 0 0 0 0
2 0 0.0068 0 0 0
3 0 0 0.0105 0 0
4 0 0 0 0.0100 0
5 0 0 0 0 0.0010
Laplace 1 2 3 4 5
1 6 0 −2 0 0
2 0 6 −2 0 0
3 −2 −2 6 −2 0
4 0 0 −2 6 0
5 0 0 0 0 4

In equation (4.12), λ2 RT R begins to noticeably contribute to the sum in the denominator, and thus to affect the image, when it is greater than JT WJ (= JT J, n × n, when W = I), which does depend on forward model parameters. Examples of matrix values are shown in appendix D.2 for a relatively large hyperparameter, i.e. λ2 RT R > JT WJ(1,1), showing how this term affects the denominator.

In the truncated SVD (TSVD) method of approximating the inverse, the ill-conditioned problem σ = J−1 Δ is handled differently. J can be approximated by using only terms with the highest singular values. Instead of using the full expansion of J into r terms of si ui ${\mathbf{v}}_{i}$ T as in equation (4.7), in TSVD the expansion is truncated at j terms, where j < r [41]. This truncation removes small-value diagonal terms in the matrix, corresponding to high frequency components.
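
Reusing the SVD factors from section 4.3.2, a TSVD reconstruction is a two-line sketch (the truncation level j_keep is arbitrary here):

    j_keep = 50;                                   % number of retained terms, j < r
    sigma_tsvd = V(:, 1:j_keep) * diag(1 ./ s(1:j_keep)) * (U(:, 1:j_keep)' * Delta);   % truncated inverse applied to the data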

4.4.3. The effect of the hyperparameter on images

The optimal choice of hyperparameter has received much attention, as reviewed in [49, 50]. The effects of the hyperparameter are covered in depth in chapter 5, but are introduced here.

Figure 4.12 shows example reconstructions for a ring-shaped target at different positions between x = 0 and 0.8 for increasing λ. Greater regularization results in greater smoothing, while features broaden and fall, or 'defocus'. By λ = 1, almost all information about the contact has been lost. For a small hyperparameter, amplitudes and resolutions do not change appreciably with the target's position (figure 4.12(a)). For larger λ, however, the reconstructed feature does depend on position, being more 'defocused' at the center of the sensor (figure 4.12(c)). The images thus misrepresent the relative sizes and strengths of targets. This spatial dependence is detrimental for interpreting tactile contacts, since they are represented very differently at different sensor locations. Furthermore, when imaging multiple contacts, a touch at the center can be masked, or hidden, by touches at the edges. In addition, non-uniformity leads to distorted feature shapes and position errors. Spatial non-uniformity increases with λ. Since the sensitivities to changes in conductivity, $\partial {V}_{i}/\partial {\sigma }_{k}={J}_{i,k}$, are smaller at the center of the sensor than at the edge, the JT WJ terms are smaller there, and the prior information λ2 RT R makes a relatively larger contribution to the sum. Thus, to retain image information, the smallest possible λ must be used, and this, in turn, necessitates the lowest possible noise levels.


Figure 4.12. Effect of the hyperparameter λ on noise-free reconstructions of a ring-shaped target of outer radius rT.o = 0.4 and inner radius rT.i = 0.2 at different x-positions as λ increases from top to bottom: (a) λ = 10−6, (b) λ = 10−4, (c) λ = 10−2, and (d) λ = 100 (16 electrodes, I–M = 7–1, I = 1 × 10−3, σ0 = 0.01, σT = 0.9σ0, N = 10−12, λs = 10−7 to 10−1).


It should also be noted in figure 4.12 that at small hyperparameters, most pronouncedly at the periphery of the sensor, the ring is not of uniform amplitude. Rather, there are several points at which the amplitude is higher, shown by the darker red. This variation arises because the image is created from a sum of eigen-functions, as shown in figure 4.7.

The sum of eigen-functions is clearly apparent when looking at cross-sections along y = 0, through the center of the target. Figure 4.13 shows cross-sections for a range of target sizes rT, at two target positions and two hyperparameters. As the target radius increases beyond rT = 0.2, it becomes too wide to represent with a single peak; the reconstruction then employs two peaks along x, and the center of the feature becomes a local minimum. The number of peaks increases stepwise with rT. Below rT = 0.1, the reconstructed peak decreases in amplitude but not significantly in width, since the eigen-functions have a fixed width. (Simplifying the display to show only values above a threshold eliminates the rippled representation, but this may hinder distinguishability of closely positioned touches.) Another thing to note is that the width of the eigen-functions increases with λ.


Figure 4.13. Cross-sections taken along the x-axis of reconstructed EIT images of a circular target with different radii for the ADJ–ADJ pattern (I–M = 1–1). (a) At the smallest hyperparameter λ = 4 × 10−8 (λs = 1.3 × 10−9) a target at the center of the sensor, x = 0. Black lines show rT = 0.01 and rT at multiples of 0.1, the red line shows rT = 0.2, and gray lines show intermediate target radii, while dashed lines show problematic reconstructions. (b) The same λ as (a) with the target at x = 0.5. (c) The same x = 0 as (a) but with a larger hyperparameter, λ = 1 × 10−2 (λs = 3.3 × 10−4). (d) The same λ = 1 × 10−2 as (c) but with the target at x = 0.5. (16 electrodes, I = 3 × 10−3, σ0 = 10−2, σT = 0.6σ0, noise-free)


When the target is off center (figures 4.13(b) and (d)) the reconstructed feature becomes increasingly asymmetric, since the effects of the hyperparameter are spatially varying. Beyond a certain radius (in this example, at rT = 0.6) the reconstruction can fail because the assumption of small conductivity changes used to linearize the problem no longer holds. Yet such stimuli might occur during tactile imaging.

4.4.4. Effects of conductivity, contrast, and current

The product JT WJ in equation (4.12) depends on the sensor conductivity, the injection current, and the I–M pattern. Since J (equation (4.4)) has units of VΩm = Am S−2 (current distance/conductance2), then JT WJ has units of A2m2/S4. This term therefore increases as the square of the current and decreases as the 4th power of the conductivity.

Thus, to have the same level of influence on the sum in equation (4.12), λ must be increased or decreased commensurately with changes in current and conductivity (table 4.2). The higher the current, the larger λ2 RT R must be to challenge JT WJ. For each 10× increase in current, the size of λ must be increased by 10× for it to have the same relative strength. In other words, the numerical value for λ entered into the simulation goes up, but the image resolution will be the same. This effect is illustrated by the cross-sections in figure 4.14 (blue versus black curves). Alternatively stated, using a higher current at the same hyperparameter results in finer resolution (see also appendix E.1). On the other hand, the higher the sensor conductivity, the smaller λ must be set to have the same influence. This relationship goes as the inverse square: for each 10× increase in conductivity, the size of λ must be decreased by 100× for it to have the same relative strength, as shown in figure 4.14 (red versus black curves). (See also appendix E.2.) Note that changing the contrast ΔσΤ does not change the effective strength of λ, although it does improve noise immunity by increasing the SNR.

Table 4.2.  Scaling λ to maintain the same level of regularization, provided that variations in conductivity are small enough for the linear approximation made in the solution to hold.

Vary | Direction, amount | Fix | For same effective regularization, change numerical value of λ by
I | ↑ 10× | σ0, Δσ | ↑ 10×
Δσ | ↑ 10× | σ0, I | no change
σ0 | ↑ 10× | Δσ, I | ↓ 100× (×10−2)

Figure 4.14. Cross-sections along the center line of the image, at y = 0 (along the x-axis), through a circular target at various x-positions. The base case curves with current I = 1 × 10−3, conductivity σ0 = 10−2, and hyperparameter λ = 10−2 are shown in black and gray. The curves for a 10× reduction in I and a 10× reduction in λ are shown in dashed blue at two positions of the target, at the center and near the edge. The curves for a 10× reduction in σ0 and a 100× increase in λ are shown in dashed red and employ the secondary y-axis, on which values are reduced by a factor of 10. Dotted gray lines indicate peaks at positions where the target was so close to the edge of the sensor that it touched it or extended partially outside the sensor area. For all three sets of curves λs = 10−3 (16 electrodes, I–M = 7–1, rT = 0.2, σT = 0.9σ0 (−10% target contrast), N = 10−12).


We therefore introduce a scaled hyperparameter λs that allows comparison across experimental conditions:

${\lambda }_{{\rm{s}}}=\lambda \,\dfrac{{\sigma }_{0}^{2}}{I}$ (4.13)

Thus, we can consider the values of the unscaled λ to be per (S/cm)2/A. The λs in figure 4.14 are all the same: λs = 10−3. Another scaling that has been used is to give λ in terms of JT J [31].
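
As a one-line check, using the base-case values of figure 4.14 and assuming the scaling written in equation (4.13):

    I = 1e-3;  sigma0 = 1e-2;  lambda = 1e-2;   % base case of figure 4.14
    lambda_s = lambda * sigma0^2 / I            % = 1e-3, as quoted in the figure caption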

Increasing the current at a given σ0 results in a larger signal Δ since V = IR. A larger Δ is beneficial in the presence of a fixed noise level. Increasing I is not usually possible in medical imaging, since the current injected into the patient must be limited for safety [47, 51]. In tactile imaging, on the other hand, it is feasible to increase I, but while the issue is not one of safety, the power consumption (P = I2 R) increases, which is a problem. In addition, increasing I may result in additional noise on the signal, or may result in heating that alters the sensor's conductivity [52].

In a piezoresistive material formed by mixing conductive particles into an insulating host, σ0 is determined by the 'loading', or fraction of particles. For the experimentalist wondering whether σ0 should be high or low, decreasing σ0 (increasing resistivity) generates a larger signal Δ for a fixed injection current, and thus improves immunity to noise, but at the cost of greater power consumption.

A higher conductivity change or contrast Δσ can be achieved with a larger gauge factor (GF), or sensor sensitivity: GF = (ΔR/R0)/ε, where ΔR is the change in resistance and ε is the strain. (Stronger touches also result in greater Δσ, but that is not controlled by sensor design.)

Increasing the target contrast Δσ increases the peak amplitude, the relationship being linear. Unlike for changes to σ0, there is no change in resolution for a fixed λ. The gauge factor should therefore be as high as possible, an intuitively obvious conclusion. A caveat applies, however: since the inverse is approximated by solving a linear system of equations, changes in resistance are recommended to be limited to approximately 20% [19].

4.5. Regularization method—the prior

The question of the most appropriate type of regularization to employ has received much consideration [34–39, 43, 44]. As summarized in [22], two main types of techniques have been widely used: 'projection methods', such as TSVD, and 'penalty methods', such as Tikhonov regularization. For the latter, the regularization matrix R, known as the prior, was introduced in section 4.4.2.

Four priors are available for use with the GN one-step algorithm, which employs Tikhonov regularization, in EIDORS: Laplace, Tikhonov (not to be confused with Tikhonov regularization generally), NOSER (Newton's one-step error reconstructor), and Gaussian_HPF (high pass filter). These have been compared in prior studies with differing conclusions. For example, Laplace, Tikhonov, and NOSER were evaluated experimentally in [39], and the shape of the feature was found to be more accurate using Laplace. A comparison of simulated EIT images and cross-sections using all four priors found that the Laplace prior had the smallest amplitude error and that NOSER failed to reconstruct in the presence of noise [37]. On the other hand, it was concluded in [53] that the Gaussian HPF prior showed the best compromise between resolution and position error. NOSER, Laplace, and Tikhonov were compared, along with a Markov random field (MRF) prior, experimentally and in simulation, showing that NOSER had poor noise performance, Laplace was the slowest, and Tikhonov resulted in artefacts [54]. The assumption in the Tikhonov prior of independent targets was said not to work well [47]. The NOSER prior was found not to reproduce conductivity changes accurately [55]. The Laplace prior has been characterized as edge-preserving [39].

The work in this book utilized the Laplace prior. To shed light on the use of this regularization method as the EIDORS default, this section compares Laplace with two of the other priors, Tikhonov and NOSER, and also with a completely different regularization approach, TSVD. There are other regularization methods that are more complex to implement, which are not examined here.

For TSVD, regularization is accomplished by simply truncating the number of terms (e.g. the number of ${\mathbf{v}}_{i}$) used in the reconstruction [26, 41]. Unlike the gradual diminution of the eigen-image amplitudes seen in figure 5.3, higher index terms are simply not included in solving the inverse problem. The number of retained terms is given by the number of singular values si that are greater than λ/100. For example, using λ = 1 in TSVD, only the first 50 terms are retained under the simulation conditions employed below.

The three priors are implemented in EIDORS (in calc_RtR_prior.m) via the regularization matrix RT R (section 4.4). The Tikhonov prior, called zeroth order [37], is the identity matrix (when shown using elements rather than nodes). The NOSER prior instead scales R by the sensitivity J [46], so the diagonal of RT R is taken from JT J. Thus, for NOSER, RT R is a diagonal matrix with values that vary in amplitude. The Laplace prior incorporates a spatial filter to penalize non-smooth image elements, which reduces 'speckle' in the image [47]. It has been called a second order high pass filter, defined as having, for a 2D sensed area, a value of 3 on each element and −1 on each adjacent element. Using the simulation conditions below, RT R had values of 6 (or occasionally 4) along the main diagonal and −2 at occasional off-diagonal locations.
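
For reference, two of these RT R matrices can be written down directly in plain Matlab (p = 0.5 is used for NOSER, following section 4.4.2); the Laplace prior needs the mesh adjacency, so it is only described in the comment.

    n = size(J, 2);
    RtR_tikhonov = speye(n);                          % Tikhonov prior: the identity matrix
    d_noser = sum(J.^2, 1)';                          % diagonal of J'*J, computed without forming J'*J
    RtR_noser = spdiags(d_noser.^0.5, 0, n, n);       % NOSER prior: (J'J)_kk raised to the power p = 0.5
    % The Laplace prior (prior_laplace.m) additionally couples each element to its mesh
    % neighbours, giving the 6 / -2 pattern of table 4.1 rather than a purely diagonal matrix.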

The hyperparameter does not have equivalent strength across different regularization methods [35], so image metrics at nominally the same λ are not directly comparable. Figure 4.15 shows the reconstructed feature's peak amplitude, normalized to its value at λ = 10−6, as a function of λ for the different regularization methods. Indeed, the extent of regularization at a given value of λ was greater for some than for others. For example, with the Tikhonov prior the peak height was reduced more for the same nominal value of λ than with the Laplace prior, while for TSVD it was reduced less. The four curves also had different shapes, with the one for NOSER crossing over two of the others. Also shown in the figure are results with the Laplace prior obtained by solving the inverse problem using elements instead of nodes; the results were identical.


Figure 4.15. Normalized peak amplitude as a function of hyperparameter for a circular target at x = 0.1. The four regularization approaches are compared using nodal solve, and elemental solve is also shown for the Laplace prior. The dashed gray box indicates which values of λ were used for each method in subsequent plots (16 electrodes, I–M = 7–1, rT = 0.2, I = 10−3, σ0 = 10−2, σT = 0.9σ0, N = 10−12).


Using the information in figure 4.15, hyperparameters for each regularization method were chosen so that the peak amplitudes were roughly comparable: 10−1 for Laplace and NOSER, 10−2 for Tikhonov, and 100 for TSVD, with a large amount of regularization chosen to accentuate the differences in the methods. EIT images and cross-sections are shown in figure 4.16 for varying target position x. One clear difference among the approaches was the well-known tendency of the Laplace prior to stretch the feature toward the perimeter; the Tikhonov prior and TSVD method kept the feature more rounded as it approached the edge. However, at x = 0.9, that rounding resulted in feature distortion, and the Laplace prior instead more accurately represented the target position. TSVD resulted in greater ringing, and the contributions of the small number of eigen-images were more visually evident (e.g. see figure 5.3). Another difference, even more apparent in the cross-sections, was that the Tikhonov and TSVD images had rougher profiles, compared to the smoother shapes obtained with Laplace and NOSER. The 'speckle' patterns resulting from Tikhonov have previously been noted [47].

Figure 4.16. The Laplace prior compared to three other approaches at comparable levels of regularization for I–M = 7–1 at λ = 0.1. (a) EIT images at different x-positions and (b) cross-sections at x = 0.0 and 0.7 (16 electrodes, rT = 0.2, I = 10−3, σ0 = 10−2, σT = 0.9σ0, N = 10−12, λs = 10−2).

The four approaches were quantitatively compared at varying target positions using four performance metrics, as shown in figure 4.17. Looking at peak amplitude (figure 4.17(a)), the dependence on x differed among the four methods. For example, the TSVD spatial dependence was not smooth, reflecting the abrupt elimination of eigen-images rather than weighted changes in their contributions. (The complete sets of curves for each method at all λ are shown in appendix D.4.) The curves for the Laplace and Tikhonov priors were most nonlinear moving from the center to the perimeter. The Laplace prior resulted in the smoothest and most uniform curve.

Figure 4.17. For levels of regularization for the three other approaches comparable to that of the Laplace prior at λ = 0.1 (given in figure 4.15), (a) peak amplitude, (b) resolution, (c) volume, and (d) position error as a function of target position (16 electrodes, 7–1, rT = 0.2, I = 10−3, σ0 = 10−2, σT = 0.9σ0, N = 10−12, λs = 10−2).

Turning to resolution (figure 4.17(b)), and keeping in mind that the levels of regularization differ somewhat among the four methods so that the magnitudes are not directly comparable, the Laplace prior produced the most linear dependence on position. TSVD and Tikhonov had the most irregular position dependence of resolution, even before the flattening that occurred for all four near the perimeter. As mentioned previously, the spatial uniformity of resolution is considered to be even more important than fine resolution, since non-uniformity will result in distortion of the shape and position of larger targets [56].

The volume or 'energy' of the feature (figure 4.17(c)) was again most uniform for the Laplace prior, although the three priors were similar up to x = 0.4. In contrast, TSVD showed the feature as being significantly larger in the center, whereas the volume for the NOSER prior decreased markedly near the perimeter. Thus, the change in conductivity was most misrepresented by NOSER and TSVD. For the Laplace prior, even though the feature defocused toward the center, its energy was preserved. Lastly, the Laplace prior produced the smallest position errors (figure 4.17(d)). It should be noted that, given the differences in behavior resulting from the choice of regularization method, the method used should be reported in publications.

Regarding the choice of a regularization method, it is clear why the Laplace prior is so frequently employed. It has greater uniformity, smoother spatial dependence, smaller position error, and higher fidelity to total peak energy, in addition to producing aesthetically pleasing images. The remainder of this book employs the Laplace prior for regularization.

Among the regularization approaches, an advantage of the Laplace prior is the preservation of feature volume over much of the sensor area and over a large range of λ, which conserves the overall intensity or 'touch energy' of the contact. If the volume of the reconstructed feature is constant, then as an increasing hyperparameter broadens the feature, the peak height drops commensurately; the two are predictably linked. Other advantages of the Laplace prior include smooth, monotonic changes in metrics with target position and small position error.

4.6. Practical implications: experimental configurations and parameter selection

The challenge of the EIT inverse problem, as shown above, is noise, and imaging decisions revolve around the strength and type of regularization to handle it. In this chapter we introduce two spectra or fingerprints, ∣UT Δ∣ and ∣VT σ∣, that elucidate the roles of noise and regularization. The first, ∣UT Δ∣, is a measure of the signal strength for understanding the signal-to-noise ratio (SNR) in EIT, and the second, ∣VT σ∣, clearly illustrates the process of regularization. The two spectra are closely related, but the former contains only forward problem information while the latter also contains inverse problem information. Here we also begin to discuss the role of the injection–measurement (I–M) pattern (a discussion continued in sections 5.3 and 7.2) as elucidated by ∣VT σ∣ spectra.

4.6.1. Noise and ∣UT Δ∣

In EIT, even tiny amounts of noise more than a thousand times smaller than the measured voltage differences Δi can overwhelm an image because of the ill-posed nature of the inverse problem. (This topic is taken up in detail in chapter 6.) The effects of noise are illustrated in figure 4.18. In general, as noise N increases the image becomes distorted, begins to show artefacts, and then breaks up completely. Artefacts are features that appear in the reconstruction that are not associated with the target. Given the importance of identifying the correct number, locations, and strengths of touches, which may be highly variable, the onset of artefacts is unacceptable. At the lowest noise level (figure 4.18(a)), the chosen hyperparameter λ = 10−2 was large enough to correctly show a single target; the triangular shape, however, was lost at this combination of λ and noise. As the noise increased (figure 4.18(d)) it became increasingly unclear what the original target was, and finally (figure 4.18(f)) the noise dominated the image. This image deterioration is what necessitates the use of increasing levels of regularization with noise.
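
A minimal sketch of how such a noise study might be set up is given below. It assumes additive Gaussian noise of standard deviation N volts on the difference data and the one-step GN inverse of section 4.4 with an assumed λ² weighting of the prior; the book's exact noise model and EIDORS's internal weighting may differ.

```matlab
% Assumed additive Gaussian noise of amplitude N volts on the difference data.
N = 9e-5;                                   % noise level (V), as in figure 4.18(d)
Delta_noisy = Delta + N*randn(size(Delta));

% One-step GN reconstruction with a precomputed regularized inverse
% (assumed form; EIDORS also includes a measurement weighting matrix).
lambda   = 1e-2;
Jreg_inv = (J'*J + lambda^2*RtR_laplace) \ J';
sigma    = Jreg_inv * Delta_noisy;
```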

Figure 4.18. Effect of increasing noise at λ = 10−2. Representative images of a triangular target at x = 0.5 for (a) N = 6 × 10−5, (b) N = 7 × 10−5, (c) N = 8 × 10−5, (d) N = 9 × 10−5, (e) N = 10 × 10−5, and (f) N = 20 × 10−5 V (16 electrodes, I–M = 7–1, rT = 0.2, I = 3 × 10−3, σ0 = 0.01, σT = 0.6σ0, noise-free).

To guide experimental decisions such as how many data points to record at each electrode to reduce noise by averaging, versus spending that time instead on employing a larger number of electrodes to try to increase resolution, one needs to understand SNR in the context of EIT. There are several conceptualizations of 'signal' that can be compared with noise. It has been stated that the smallest values in Δ determine the quality of the image since they are the most changed by noise [10] and large voltages can be measured more precisely than small ones [57]. This assertion turns out not to be the case, as can be shown by simulations that apply noise selectively to the largest or smallest Δi (see chapter 6), and for reasons that become evident below. Alternatively, the average amplitude of Δ has been used as a measure of signal strength, which works well, but only for a given system and target. For example, it has been suggested that I–M patterns having a larger average Δ should be more immune to noise [5, 10, 30], but that is also not necessarily the case.

It is instructive to examine ∣UT Δ∣ spectra [20, 26, 58] to appreciate how small levels of noise can destabilize an image. The use of ∣UT Δ∣ also has the advantage of allowing comparison of different systems and targets. To judge the impact of a certain level of noise on a measurement, the signal basis vectors ui are multiplied by the measured voltage differences Δ [20]. Recall that the ui (figure 4.10) depend only on the system forward-problem parameters, such as the I–M pattern and number of electrodes. The ui with the smallest indices i correspond to the largest singular values si (figure 4.9) and thus have the most importance in determining the reconstructed image. To understand this product, return to the Fourier expansion analogy: the coefficient bn in the sum over n of bn sin(nπx/L) is found for a particular f(x) by integrating the product f(x)sin(nπx/L) over the interval –L to L. Likewise, the product UT Δ (size m × 1) gives the amplitudes of coefficients in the expansion of Δ into eigen-Δs.

The absolute values Pi = ∣ui T Δ∣ are known as Picard coefficients. The Pi have units of volts and can be directly compared to the noise. (The Pi can be found by adapting the Matlab EIDORS code; our codes are available upon request.) Using the Picard coefficients for SNR dates back to employing SVD for solving inverse problems and regularizing ill-conditioned matrices [59].
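
As a sketch of the computation (ours, not the adapted EIDORS code mentioned above), the Pi follow directly from the SVD of J and the measured differences Δ and can be plotted against an assumed noise level N:

```matlab
% Picard coefficients P_i = |u_i'*Delta|, in volts, compared to the noise N.
N = 3e-9;                            % example noise level (V), as in figure 4.19(b)
[U, S, V] = svd(full(J), 'econ');
P = abs(U' * Delta);

semilogy(P(1:130), 'o-'); hold on;
yline(N, '--', 'noise level');       % requires MATLAB R2018b or later
xlabel('singular value index i');
ylabel('|u_i^T \Delta|  (V)');
```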

Figure 4.19(a) shows the first 50 Pi without noise for two targets, circular and triangular, plotted in order of their singular value index i on a logarithmic axis. Differences in Δ for the two targets are translated into differences in Pi . Likewise, differences in I–M pattern would also be manifested in the Pi .

Figure 4.19. (a) The first 50 ∣UT Δ∣ terms, or Picard coefficients Pi , as a function of singular value index i for triangular and circular targets at x = 0.5, with negligible noise (N = 10−12 V). (b) The first 130 ∣UT Δ∣ terms for the triangular target with negligible noise and with noise at two levels, N = 3 × 10−9 and N = 6 × 10−9. The corresponding EIT images made using λ = 10−7 (λs = 10−8) are included as insets (16 electrodes, I–M = 7–1, rT = 0.2, I = 1 × 10−3, σ0 = 10−2, σT = 0.6σ0).

Figure 4.19(b) shows the first 130 Pi for the triangular target without noise and when two levels of noise were added, indicated by the dashed horizontal lines. Without noise the Pi decay with i and then drop abruptly above i = r. Terms with index greater than the rank of J, here beyond position 104, which are nominally zero but are actually small values near 10−17 V due to numerical noise, do not play a role in the reconstruction. Raising their values by 8 orders of magnitude by adding N = 1 × 10−9 V did not interfere with the creation of a good image. With noise, the Pi instead level off at the noise level. The index i at which the leveling begins is called the Picard threshold. Using the smallest hyperparameter that permitted reconstruction, λ = 10−7, at N = 3 × 10−9 V the target was well represented, as shown by the EIT inset. At N = 6 × 10−9 V the images broke up and showed artefacts. Even though 6 nV might seem like a tiny amount of noise compared to values in Δ, which may be 10 mV (figure 4.3), it is large compared to the amplitudes Pi of the higher-index ui . The lowest-index ui , which have the largest Pi , are not affected until the noise reaches mV levels.

Thus, noise creates inaccurate images because of the incorrect amplitudes of the affected ui . Recalling from equation (4.7) that the si , ui , and ${\mathbf{v}}_{i}$ at the singular value index i are linked, the corresponding eigen-images ${\mathbf{v}}_{i}$, which then also have corrupted amplitudes, are used in the reconstruction when they should be disregarded, as shown below. This is called 'over-fitting'. As noise increases, the number of eigen-images ${\mathbf{v}}_{i}$ that can be beneficially employed in the reconstruction decreases, being equal to the number of Pi above the noise level. The ${\mathbf{v}}_{i}$ corresponding to Pi in the noise plateau are excluded by increasing λ, the mechanism of which is shown in the next section. When using truncated SVD for regularization, the truncation is chosen to start at the index i at which the Pi begin to flatten due to noise [32]. In summary, the number of ui that can be used to correctly represent the signal decreases with noise, and only the corresponding correct image components ${\mathbf{v}}_{i}$ should be retained.

Changing injection current, sensor conductivity, and contrast produces proportional up or down shifting of the ∣UT Δ∣ spectra because Δ is linearly proportional to contrast and current and inversely proportional to conductivity (section 4.4.4). (The basis functions ui are unaffected by these parameters.) Downward shifts make the system more susceptible to noise.
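
For example, because every Pi scales with Δ while the noise floor does not, halving the injected current halves each Pi and leaves fewer of them above N; a two-line check (using P and N from the sketch above) makes this explicit.

```matlab
% Halving the current halves Delta and hence every P_i; the noise floor N
% is unchanged, so fewer coefficients remain usable.
n_usable_full = sum(P > N);
n_usable_half = sum(0.5*P > N);
```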

Noise has alternatively been compared to the singular values si . ∣UT Δ∣ and S spectra have similar appearance (see appendices D.5 and D.6). However, the si pertain to J and therefore only characterize the measurement system, whereas the Pi also carry information about the current, the target, and other factors through the measured voltage differences Δ.

4.6.2. ∣VT σ∣ and regularization

While ∣UT Δ∣ spectra are related to the input, or signal, side of the EIT process, there is an analogous spectrum on the output, or image, side, which is ∣VT σ∣. The two spectra strongly resemble each other, as would be expected from equation (4.7). However, ∣VT σ∣ reflects the regularization used to obtain the inverse. Given the profound effects of regularization on the quality of the image (figure 4.12), it is instructive to examine its effects on the various image components. (This topic is taken up in detail in chapter 5.) Taking the product of V and the image vector σ [20] provides the coefficients, or weights, Ci of the ${\mathbf{v}}_{i}$ that sum to form the particular EIT image: Ci = ∣${\mathbf{v}}_{i}$ T σ ∣. Thus ∣VT σ∣ shows the contributions of particular eigen-images ${\mathbf{v}}_{i}$ to the final reconstructed image, as was illustrated in figure 4.8. (Note that to obtain the ∣VT σ∣, the same mesh must be used for the forward and inverse problems, constituting an inverse crime.)

As a side note, the product SVT σ can also be found. At small λ, Ci = Pi /si [26] since J σ = USVT σ = Δ.
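
As a sketch, using the SVD computed earlier and a reconstructed image vector sigma obtained on the same mesh as the forward model, the coefficients and the small-λ relation can be checked as follows:

```matlab
% Eigen-image coefficients C_i = |v_i'*sigma| for a reconstruction sigma.
C = abs(V' * sigma);

% At small lambda, C_i is approximately P_i/s_i (up to the rank r of J).
s = diag(S);
r = rank(full(J));
C_small_lambda = P(1:r) ./ s(1:r);
```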

Examples of ∣VT σ∣ spectra are shown in figure 4.20 (linear axis) at λ = 10−6 with increasing noise levels, as well as at λ = 10−5 at the top noise level. The most significant coefficients Ci are shown as peaks in the plot. (Again, Ci for i above r = 104 are due to numerical rounding errors and are near zero.) The corresponding EIT images are shown above the plot, and the contributions of eigen-image ${\mathbf{v}}_{101}$ are shown in the inset.

Figure 4.20. The effects of increasing noise and λ on the ∣VT σ∣ spectrum of a triangular target at x = 0.5. The corresponding EIT images are included at the top. Also shown are the ${\mathbf{v}}_{101}$ for each case, ordered top to bottom in the same way as the legend (16 electrodes, I–M = 7–1, rT = 0.2, I = 1 × 10−3, σ0 = 10−2, σT = 0.6σ0, Laplace prior).

Below index i = 80, the four spectra were identical. With negligible noise (black line), the larger Ci were of comparable amplitude up to i = 104, and the EIT image was a good approximation to the target. With increasing noise, the spectra began to show increasing, unpredictable contributions from the high-index ${\mathbf{v}}_{i}$, due to the corruption of the corresponding ui (figure 4.19(b)): which ${\mathbf{v}}_{i}$ are sensitive to noise is determined by the amplitudes of the corresponding Pi . For example, the amplitude of ${\mathbf{v}}_{101}$ was virtually zero at N = 10−12 V and increased in intensity with N. At N = 2 × 10−8 V (green line) the EIT image developed ringing but still had no artefacts. At N = 3 × 10−8 V, with the incorrect Picard coefficients being even larger, the EIT image developed artefacts. At an even higher noise level, contributions of the high-frequency eigen-images dominated the final image, resulting in a solution that was no longer approximately correct. Raising λ to 10−5 dampened the amplitudes of terms above i = 87, effectively eliminating those eigen-images from the sum and thereby removing artefacts from the image.

As discussed above, regularization results in smoothing because above a certain i, image information, carried by the ${\mathbf{v}}_{i}$, is removed. (As phrased in [29], regularization schemes are spectral filtering methods.) Raising λ increases the number of ${\mathbf{v}}_{i}$ that are diminished, and the damping effect of λ on the Ci moves from right to left as λ increases. The images are changed in different ways depending on the I–M pattern and the prior, and to different extents depending on target position. For very large hyperparameters, which for the simulation conditions used in this book means λ = 1, only a handful of low-frequency eigen-images remain and almost all information about the contacts with the sensor is lost (figure 4.12(d)).
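
One way to visualize this right-to-left sweep is through the filter factors of the identity-prior (Tikhonov) case, fi = si²/(si² + λ²) [26, 29]; the sketch below plots them for several λ. For the Laplace and NOSER priors the damping instead follows image-based criteria rather than this simple formula, as discussed next.

```matlab
% Tikhonov (identity prior) filter factors f_i = s_i^2./(s_i^2 + lambda^2):
% terms with s_i >> lambda pass nearly unchanged, terms with s_i << lambda
% are damped, and the transition sweeps to lower i as lambda grows.
s = diag(S);
figure; hold on;
for lambda = [1e-6 1e-4 1e-2 1]
    f = s.^2 ./ (s.^2 + lambda^2);
    plot(f, 'DisplayName', sprintf('\\lambda = %g', lambda));
end
set(gca, 'YScale', 'log');
xlabel('singular value index i'); ylabel('filter factor f_i');
legend('show');
```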

Turning to examine the operation of the prior, noise-free ∣VT σ∣ spectra for four regularization methods (section 4.4.1) at comparable levels of regularization, as used in figure 4.16, are illustrated in figure 4.21 for a target at x = 0.1. (These spectra have a small number of peaks because they are dominated by the circularly symmetric ${\mathbf{v}}_{i}$.) The un-regularized spectrum is shown in black. The TSVD method (figure 4.21(a), green) simply sets all the Ci above a particular index, here i = 52, to zero [41]; below that index the terms were equal to those without regularization (the green and black lines in the figure overlap perfectly, so the black line is hidden). TSVD acts as a stepwise low-pass filter on i. The Tikhonov prior (blue) instead reduced the amplitudes gradually, the decrease becoming visually apparent from i = 40, where the blue line drops below the green one, and ending with near-zero amplitudes at i = 62. It behaved as a ramped low-pass filter on i. The Laplace and NOSER priors (figure 4.21(b)), on the other hand, penalize different properties of the image, damping the Ci based on image criteria rather than on the index i. Note how, even at high i, some of the amplitudes did not go entirely to zero. Since the four filters have different effects on the Ci , the resulting EIT images preferentially retain different types of information. (∣VT σ∣ spectra for a range of hyperparameters are shown in appendix D.7.)

Figure 4.21. The ∣VT σ∣ spectra for a target at x = 0.1 obtained using minimal regularization (black line) and (a) the Tikhonov prior (λ = 10−2) and the TSVD method (λ = 100) and (b) the Laplace and NOSER priors (λ = 10−1) (16 electrodes, I–M = 7–1, rT = 0.2, I = 10−3, σ0 = 10−2, σT = 0.9σ0, N = 10−12).

4.6.3. Role of the injection–measurement pattern

∣VT σ∣ spectra also provide insight into differences in performance among I–M patterns, as shown in figure 4.22. At small λ, all full-rank I–M patterns provide virtually identical reconstructions, since a similar number of ${\mathbf{v}}_{i}$ are used; differences in performance only arise when regularizing. For a circular target at x = 0.0, EIT images made using λ = 10−6 (blue lines) with full-rank patterns (here 1–1 and 7–1, figures 4.22(a) and (b)) were constructed from six circularly symmetric ${\mathbf{v}}_{i}$. The indices i of these ${\mathbf{v}}_{i}$ varied with the I–M pattern. Again, more ${\mathbf{v}}_{i}$ were needed for a target at x = 0.5 (black lines). For lower-rank patterns, such as 4–4 (figure 4.22(c)), the spectra terminate at smaller i, so the final EIT images are constructed from fewer ${\mathbf{v}}_{i}$ and suffer from poorer resolution and diminished peak height. The spectra make clear the nature and origin of the deleterious effects of low rank due to symmetry. At λ = 1 (red lines, overlying the blue ones at small i) the Ci were diminished except at the smallest i. For I–M = 4–4, increasing λ had no effect on the image until λ was large enough to reach i = 62.

Figure 4.22. The absolute values of VT σ for a circular target at x = 0.0 for λ = 10−6 (blue) and λ = 100 (red) and at x = 0.5 for λ = 10−6 (black) for I–M = (a) 1–1, (b) 7–1, and (c) 4–4. (d) Index if at which Ci are first damped as a function of λ (Laplace prior) (16 electrodes, rT = 0.2, I = 10−3, σ0 = 10−2, σT = 0.9σ0, N = 10−12).

Looking more closely at the spectrum for I–M = 1–1, its reputation for poor performance compared to other full-rank patterns becomes clear: the ${\mathbf{v}}_{i}$ at i = 1 and 2 were insignificant for targets away from the perimeter, and the first peak for the target at x = 0 was located not at i = 1 but at i = 8, and it was small. Because the lowest-index terms are the largest contributors to an image and the last to be affected by regularization, targets at the center of the sensor imaged using 1–1 would suffer more from regularization than with other patterns. This is how the lower sensitivity of 1–1 compared to 7–1 at the center of the sensor (figure 4.6) manifests itself in the SVD.

For hyperparameters between 10−5 and 100, the singular value indices if can be found at which the larger Ci (Ci > 2 × 10−4) were first lowered by more than 5% from their values at λ = 10−6. The points can be well fit to the form if = −a log(λ) + b, as shown in figure 4.22(d) for the circular target at x = 0.5 (black curves in figures 4.22(a)–(c)). The fit for 1–1 had a steeper slope than the others, showing a more rapid advance across i with λ; it also had a negative intercept, showing that even C1 was affected at λ = 1: the effect of regularization was more severe for 1–1 than for other I–M patterns. (The line for 4–4 did not begin until λ = 10−3.)
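
A sketch of this procedure is given below, where C_ref holds the coefficients at λ = 10−6 and C_lam those at the test value of λ; base-10 logarithms are assumed for the fit, and the (λ, if) pairs used here are hypothetical placeholders, not the book's data.

```matlab
% Index i_f at which the larger coefficients are first damped: the first i
% with C_ref(i) > 2e-4 whose value has dropped by more than 5%.
big    = C_ref > 2e-4;
damped = (C_ref - C_lam) ./ C_ref > 0.05;
i_f    = find(big & damped, 1);

% Fit i_f = -a*log10(lambda) + b over a set of hyperparameters
% (the i_f values below are hypothetical placeholders).
lambdas  = [1e-5 1e-4 1e-3 1e-2 1e-1 1];
i_f_vals = [ 95   82   70   57   45   33];
p = polyfit(log10(lambdas), i_f_vals, 1);   % slope p(1) = -a, intercept p(2) = b
```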

To summarize, the I–M pattern affects performance in two ways. The first is symmetry, which plays a role primarily at small hyperparameters. Symmetry results in less information being gathered despite an equal number of measurements, as shown by a reduced rank r: higher-order ${\mathbf{v}}_{i}$ are missing, leading to lower-quality reconstructions. (The highest-symmetry patterns also result in mirror-image features, not shown here.) For full-rank patterns, at small hyperparameter it makes little difference which I–M is chosen. The second consideration applies at large hyperparameters: low sensitivity at the center of the sensor. This manifests itself in higher indices i for the centrally-concentrated modes, so that they are attenuated at relatively lower λ, and in stronger damping versus i for a given λ. The examination of the role of the I–M pattern continues in sections 5.3, 6.2.1, and 7.2.

4.6.4. Discussion

A critical concept is that the EIT image can be considered to be a sum of eigen-images, with a resolution determined by the number of independent terms in that series whose amplitudes are greater than the noise. High symmetry measurement configurations such as polar have fewer independent terms, and thus inherently lower possible resolution.

To understand the action of the hyperparameter it can thus be helpful to look at ∣VT σ∣ spectra, which are alternative views of the EIT image in the form of fingerprints that show the coefficients Ci of the eigen-images that sum to produce the EIT image. The spectra reveal how the coefficients change with the particular coupling of injection and measurement pairs at different locations on the surface and with target properties. The hyperparameter controls the number of eigen-images contributing to the reconstruction; more specifically, λ controls the amplitudes of those eigen-images, which are systematically reduced in favor of prior information as λ increases. The prior information is typically a uniform background conductivity for a tactile sensor. Therefore, the features broaden and their amplitudes fall.

Since the ∣VT σ∣ spectra reflect the action of the hyperparameter, through the final image σ, they demonstrate explicitly how smoothing occurs and show the relative rate versus index i at which increases in λ affect the coefficients. The eigen-images corresponding to the highest-index singular values, which are associated with the highest spatial frequency information, are removed first. With increasing λ, the number of eigen-images contributing to the EIT image drops as log λ. The rule of thumb that regularization removes image components for which the singular values satisfy si² < λ² holds for a broad range of i but finally breaks down at large λ. For large hyperparameters, which for the simulation conditions used in this chapter means λ = 1, only a handful of low-frequency eigen-images remain. Thus almost all information about the contacts is lost, including the radius. The scheme governing the details of coefficient reduction is determined by the prior that is chosen. While regularization acts broadly as a low-pass filter, where the index i plays the role of frequency, the details depend on the image filtering strategy.

The I–M pattern and the target characteristics determine the original coefficients in the ∣VT σ∣ spectra. For the simulations in this chapter, the coefficients were unaffected below λ = 10−6. The particulars of which coefficients are large, and thus which associated ${\mathbf{v}}_{i}$ make a substantial contribution to the final image, provide insight into the reconstructions. Some I–M patterns yield no appreciable peaks at low index for a given target, while others have a peak immediately at i = 1. The former are thus affected at a smaller hyperparameter than the latter. These differences reflect the underlying physics: where the current flows through the sensor depends on the injection pattern. For the same reason the I–M pattern determines the rate at which the hyperparameter advances across the spectrum of eigen-image coefficients. The steepness of that slope and the particular eigen-images contributing to the reconstructed feature determine the image metrics.

4.7. Summary and conclusions

While it is possible for experimentalists to use EIT as a 'black box' because of the user-friendliness of EIDORS, it is helpful to understand what the algorithms are doing in order to obtain the most informative images possible under given conditions. This chapter illustrated the basic manipulations and concepts behind the techniques that have been frequently used in tactile imaging without assuming a previous high level of comfort with the mathematical methods.

EIT is attractive for tactile imaging because a relatively detailed image can be created from a comparatively small number of electrodes placed only at the sensor periphery by interrogating those electrodes in multiple combinations, using them at various times for current injection and voltage measurement, as specified by the I–M pattern. The resulting frame of measurements at a given time is compared to one from a reference time, yielding a noisy set of small voltage differences Δ, with many values near zero.

The continuum version of Ohm's law returns the voltage difference between any two points on the periphery, given a current flowing between two other periphery points and knowledge of the sensor conductivity. The relationship between a conductivity change at a particular interior location and a voltage change at a particular perimeter location is approximated in a linearized sensitivity matrix, or Jacobian, J, which depends on the I–M pattern. This mapping is the straightforward 'forward problem'. The 'inverse problem' involves inverting this mapping to find interior conductivity changes from measured perimeter voltage changes, which is more difficult because the matrix J is 'ill-conditioned' and highly sensitive to noise.

Visualizations of J show where on the sensor different I–M patterns have high and low sensitivities to conductivity changes. As a result of the perimeter placement of the electrodes, the terms in the sensitivity matrix J are larger at the perimeter than at the center. So, regardless of the I–M pattern, EIT is most sensitive to conductivity changes near the sensor edges, and regularization affects features in the center to a greater extent. This non-uniformity in the effects of regularization is highly detrimental to tactile images.

One technique for understanding the challenges and choices in EIT is singular value decomposition, which expresses J in terms of two orthonormal bases U and V, the former for the measured voltage differences Δ (eigen-Δs, ui ) and the latter for the predicted conductivity changes σ (eigen-images, ${\mathbf{v}}_{i}$). EIT images can be constructed from a sum of the modes ${\mathbf{v}}_{i}$, with image fidelity depending on the number of terms in this series. Higher indices i correspond to higher spatial frequency image components. Increasing the number of electrodes increases the number of terms, thereby increasing resolution, whereas high symmetry in the current injection or voltage measurement electrode placements, such as on opposite sides of the sensor, results in fewer terms and thus inherently worse reconstruction performance. Likewise, the measured voltages can be represented as a sum of the ui . Even small amounts of noise can overwhelm the higher order terms of this series, which is why noise is so detrimental.

Among the reconstruction algorithms, the GN one-step has been the most attractive for tactile imaging because of its speed, even though it has lower resolution compared to iterative algorithms. It employs a regularized inverse of J that can be pre-calculated, subsequently only requiring matrix multiplication. An examination of this ${{\bf{J}}}_{{\bf{reg}}}^{{\boldsymbol{-}}{\bf{1}}}$ illustrated how the extent of regularization is controlled by the hyperparameter λ and its manner by the prior. Hyperparameter values must be adjusted if different values of current and sensor conductivity are employed. For a 10× decrease in injected current, a hyperparameter 10× smaller provides the same influence on the EIT image, whereas for a 10× decrease in sensor conductivity, a hyperparameter 100× larger has the same effect. Increases in contrast due to stronger touches or higher GF do not affect the required λ. While regularization stabilizes the reconstruction in the presence of noise, it broadens or defocuses the reconstructed features in a spatially-dependent manner, with the effects varying with the prior. The Laplace prior, which is the one most commonly employed in tactile imaging, is advantageous from the perspective of uniformity across the sensor area. In particular, it best conserves feature volume, which is related to the strength of a tactile contact. To obtain images of reasonable resolution, to have sensitivity to small and light touches, and to reduce non-uniformity over the sensor with its resulting distortions and masking, every effort must be made to reduce noise so that a smaller λ can be employed.

There are two sets of coefficients that are helpful for understanding the performance of an EIT system. The spectrum of Picard coefficients Pi = ∣ui T Δ∣, which have units of V, allows comparison of the various signal components for a particular scenario to the noise amplitude N. Only signal components ui with Pi > N correspond to image components ${\mathbf{v}}_{i}$ that are meaningful. Those ${\mathbf{v}}_{i}$ associated with Pi < N must be diminished by an appropriately set λ. The spectrum of coefficients Ci = ∣${\mathbf{v}}_{i}$ T σ ∣ shows the importance of the various components to the image and how they are affected by regularization. The ${\mathbf{v}}_{i}$ with the largest i, which are associated with the highest spatial frequency information, are diminished by λ first. While regularization acts broadly as a low-pass filter based on i, the details depend on the image filtering strategy, or prior, as well as on the I–M pattern. The rank of the I–M pattern determines the maximum number of ${\mathbf{v}}_{i}$, and thus image quality at small λ. The sensitivity of the pattern at the center of the sensor determines how strongly features are defocused there at large λ.

References

  • [1]Booth M J 2000 Design and development of a distributed planar pressure sensor utilising electrical impedance tomography PhD Thesis 
  • [2]Alirezaei H, Nagakubo A and Kuniyoshi Y 2007 A highly stretchable tactile distribution sensor for smooth surfaced humanoids pp 167–73
  • [3]Nagakubo A, Alirezaei H and Kuniyoshi Y 2007 A deformable and deformation sensitive tactile distribution sensor pp 1301–8
  • [4]Silvera-Tawil D, Rye D, Soleimani M and Velonaki M 2015 Electrical impedance tomography for artificial sensitive robotic skin: a review IEEE Sens. J  15 2001–16
  • [5]Russo S, Nefti-Meziani S, Carbonaro N and Tognetti A 2017 A quantitative evaluation of drive pattern selection for optimizing EIT-based stretchable sensors Sensors 17 ARTN 1999
  • [6]Adler A, Gaggero P O and Maimaitijiang Y 2011 Adjacent stimulation and measurement patterns considered harmful Physiol. Meas. 32 731–44
  • [7]Tawil D S, Rye D and Velonaki M 2012 Interpretation of the modality of touch on an artificial arm covered with an EIT-based sensitive skin Int. J. Robot. Res. 31 1627–41
  • [8]Kaipio J P, Seppänen A, Voutilainen A and Haario H 2007 Optimal current patterns in dynamical electrical impedance tomography imaging Inverse Prob. 23 1201–14
  • [9]Dickin F and Wang M 1996 Electrical resistance tomography for process applications Meas. Sci. Technol. 247–60
  • [10]Shi X T, Dong X Z, Shuai W J, You F S, Fu F and Liu R G 2006 Pseudo-polar drive patterns for brain electrical impedance tomography Physiol. Meas. 27 1071–80
  • [11]Bera T K and Nagaraju J 2012 Studying the resistivity imaging of chicken tissue phantoms with different current patterns in Electrical Impedance Tomography (EIT) Measurement 45 663–82
  • [12]Visentin F, Fiorini P and Suzuki K 2016 A deformable smart skin for continuous sensing based on electrical impedance tomography Sensors 16 ARTN 1928
  • [13]Schuessler T F and Bates J H T 1998 Current patterns and electrode types for single-source electrical impedance tomography of the thorax Ann. Biomed. Eng. 26 253–9
  • [14]Russo S, Nefti-Meziani S, Carbonaro N and Tognetti A 2017 Development of a high-speed current injection and voltage measurement system for electrical impedance tomography-based stretchable sensors Technologies 48
  • [15]Nankani A 2019 EIT based piezoresistive tactile sensors: a simulation study MS Thesis 
  • [16]McDonald K T 2000 Physics Examples and Other Pedagogic Diversions: Resistance of a Disk  (Princeton University) 
  • [17]Kauppinen P, Hyttinen J and Malmivuo J 2006 Sensitivity distribution visualizations of impedance tomography measurement strategies Int. J. Bioelectromag 63–71
  • [18]Chen Y 2018 Tactile sensing with compliant structures for human-robot interaction PhD Thesis  (Department of Mechanical Engineering, University of Maryland) 
  • [19]Holder D S 2005 Introduction to biomedical electrical impedance tomography Electrical Impedance Tomography: Methods, History and Applications, Series in Medical Physics and Biomedical Engineering , ed C G Orton, J H Nagel and J G Webster (Bristol: IOP Publishing)  appendix B 
  • [20]Lionheart W, Polydorides N and Borsic A 2005 The reconstruction problem Electrical Impedance Tomography: Methods, History and Applications, Series in Medical Physics and Biomedical Engineering , ed C G Orton, J H Nagel and J G Webster (Bristol: IOP Publishing)  ch 1 
  • [21]Polydorides N and Lionheart W R B 2002 A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project Meas. Sci. Technol. 13 1871–83
  • [22]Tehrani J N, McEwan A, Jin C and van Schaik A 2012 L1 regularization method in electrical impedance tomography by using the L1-curve (Pareto frontier curve) Appl. Math. Model. 36 1095–105
  • [23]Wang M, Wang Q and Karki B 2016 Arts of electrical impedance tomographic sensing Philosoph. Trans. A  374 20150329
  • [24]Adler A and Guardo R 1994 A neural network image reconstruction technique for electrical impedance tomography IEEE Trans. Med. Imag 13 594–600
  • [25]Zadehkoochak M, Blott B H, Hames T K and George R F 1991 Spectral expansion analysis in electrical-impedance tomography J. Phys. D  24 1911–6
  • [26]Hansen P C 2010 Discrete Inverse Problems: Insight and Algorithms, Fundamentals of Algorithms  (Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM)) 
  • [27]Bagheri R 2020 Understanding singular value decomposition and its application in data science Towards Data Science  https://towardsdatascience.com/understanding-singular-value-decomposition-and-its-application-in-data-science-388a54be95d 
  • [28]Strang G 2018 6. Singular value decomposition (SVD) MIT 18.065 Matrix Methods in Data Analysis, Signal Processing, and Machine Learning  https://www.youtube.com/watch?v=rYz83XPxiZo  
  • [29]Hansen P C 2010 Computational aspects: regularization methods Discrete Inverse Problems: Insight and Algorithms , ed N J Higham (Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM))  ch 4 
  • [30]Gaggero P O 2011 Miniaturization and distinguishability limits of electrical impedance tomography for biomedical application PhD Thesis 
  • [31]Tang M X, Wang W, Wheeler J, McCormick M and Dong X Z 2002 The number of electrodes and basis functions in EIT image reconstruction Physiol. Meas. 23 129–40
  • [32]Hansen P C 2010 Fundamentals of algorithms DIAS 006—Discrete Inverse Problems—Day 1, slides based on Discrete Inverse Problems: Insight and Algorithms vol 2020  (Philadelphia, PA: Society for Industrial and Applied Mathematics (SIAM)) 
  • [33]Barber D C 2005 EIT: the view from Sheffield Electrical Impedance Tomography: Methods, History and Applications of the Series Series in Medical Physics and Biomedical Engineering , ed C G Orton, J H Nagel and J G Webster (Bristol: Institute of Physics Publishing)  ch 11 
  • [34]Vauhkonen M, Vadasz D, Karjalainen P A, Somersalo E and Kaipio J P 1998 Tikhonov regularization and prior information in electrical impedance tomography IEEE Trans. Med. Imag 17 285–93
  • [35]Adler A, Dai T and Lionheart W R B 2007 Temporal image reconstruction in electrical impedance tomography Physiol. Meas. 28 S1–S11
  • [36]Dai T 2008 Image reconstruction in EIT using advanced regularization frameworks PhD Thesis 
  • [37]Islam M R and Kiber M A 2014 Electrical impedance tomography imaging using Gauss–Newton algorithm pp 1–4
  • [38]Ranade N V and Gharpure D C 2019 Enhancing sharp features by locally relaxing regularization for reconstructed images in electrical impedance tomography J. Electr. Bioimp 10 2–13
  • [39]Sarode V, Patkar S and Cheeran A N 2013 Comparison of 2-D algorithms in EIT based image reconstruction Int. J. Comput. Appl 69 6–11
  • [40]Adler A and Guardo R 1996 Electrical impedance tomography: regularized imaging and contrast detection IEEE Trans. Med. Imag 15 170–9
  • [41]Hansen P C 1987 The truncated SVD as a method for regularization BIT 27 534–53
  • [42]Lionheart W R B 2004 EIT reconstruction algorithms: pitfalls, challenges, and recent developments Physiol. Meas. 25 125–42
  • [43]Grychtol B, Elke G, Meybohm P, Weiler N, Frerichs I and Adler A 2014 Functional validation and comparison framework for EIT lung imaging PLoS One 
  • [44]Hamilton S J, Lionheart W R B and Adler A 2019 Comparing D-bar and common regularization-based methods for electrical impedance tomography Physiol. Meas. 40 ARTN 044004 
  • [45]Elsanadedy A 2012 Application of Electrical Impedance Tomography to Robotic Tactile Sensing MS Thesis 
  • [46]Dai T and Adler A 2008 Electrical impedance tomography reconstruction using L1 norms for data and image terms pp 2721–4
  • [47]Adler A and Boyle A 2017 Electrical impedance tomography: tissue properties to image measures IEEE Trans. Biomed. Eng. 64 2494–504
  • [48]Widodo A and Endarko  2018 Experimental study of one step linear Gauss–Newton algorithm for improving the quality of image reconstruction in high-speed electrical impedance tomography (EIT) J. Phys.: Conf. Ser. 1120 012067
  • [49]Graham B M and Adler A 2006 Objective selection of hyperparameter for EIT Physiol. Meas. 27 S65–79
  • [50]Braun F, Proença M, Solà J, Thiran J-P and Adler A 2017 A versatile noise performance metric for electrical impedance tomography algorithms IEEE Trans. Biomed. Eng. 64 2321–30
  • [51]Saulnier G J 2005 EIT instrumentation Electrical Impedance Tomography: Methods, History and Applications, Series in Medical Physics and Biomedical Engineering , ed C G Orton, J H Nagel and J G Webster (Bristol: IOP Publishing)  ch 2 
  • [52]Sauerbrunn E, Chen Y, Didion J, Yu M, Smela E and Bruck H A 2015 Thermal imaging using polymer nanocomposite temperature sensors EIT Phys. Stat. Sol. A  212 2239–45
  • [53]Tawil D S 2012 Artificial skin and the interpretation of touch in human-robot interaction PhD Thesis 
  • [54]Zhou Z, Malone E, dos Santos G S, Li N, Xu H and Holder D 2015 Comparison of different quadratic regularization for electrical impedance tomography IFMBE Proc. vol 45, ed I Lacković and D Vasic, pp 200–3
  • [55]Cheney M, Isaacson D, Newell J C, Simske S and Goble J 1991 NOSER: An algorithm for solving the inverse conductivity problem Int. J. Imag. Syst. Technol. 66–75
  • [56]Adler A et al 2009 GREIT: a unified approach to 2D linear EIT reconstruction of lung images Physiol. Meas. 30 S35–55
  • [57]Zhang L, Wang H and Zhang L 2010 Single source current drive patterns for electrical impedance tomography pp 1477–80
  • [58]Correia T, Gibson A, Schweiger M and Hebden J 2009 Selection of regularization parameter for optical topography J. Biomed. Opt. 14 034044
  • [59]Hansen P C 1990 The discrete Picard condition for discrete ill-posed problems BIT Numer. Math 30 658–72
  • [60]Strang G 2015 7.4 Laplace Equation, MIT OCW RES.18-009 Learn Differential Equations: Up Close with Gilbert Strang and Cleve Moler  https://www.youtube.com/watch?v=-D4GDdxJrpg  
  • [61]Adler A 2017 Sensitivity of EIT and reconstruction - sensitivity of EIT in 2D https://eidors3d.sourceforge.net/tutorial/EIDORS_basics/sensitivity_map.shtml  
  • [62]Kauppinen P, Hyttinen J and Malmivuo J 2006 Sensitivity distribution visualizations of impedance tomography measurement strategies Int. J. Bioelectromag. 63–71
