Paper · Open access

Viscosity quantification using multi-contrast magnetic particle imaging


Published 2 August 2018 © 2018 The Author(s). Published by IOP Publishing Ltd on behalf of Deutsche Physikalische Gesellschaft
Citation: Martin Möddel et al 2018 New J. Phys. 20 083001. DOI: 10.1088/1367-2630/aad44b



Abstract

Magnetic particle imaging (MPI) is a relatively new tomographic imaging technique using static and oscillating magnetic fields to image the spatial distribution of magnetic nanoparticles, which is the contrast MPI was initially designed for. Recently, however, it has been shown that MPI can be extended to a multi-contrast method that allows one to simultaneously image the signals of different MPI tracer materials. Additionally, it has been shown that changes in the particles' environment, e.g. its viscosity, have an impact on the MPI signal and can potentially be used for functional imaging. The purpose of the present work is twofold. First, we generalize the MPI imaging equation to describe different multi-contrast settings in a unified framework. This allows for a more precise interpretation and discussion of results obtained by single- and multi-contrast reconstruction. Second, we propose and validate a method to determine the viscosity of a small sample from a dual-contrast reconstruction. To this end, we exploit a calibration curve mapping the sample viscosity onto the relative signal weights within the channels of the dual-contrast reconstruction. The latter allows us to experimentally determine the viscosity of the particle environment in the range of 1–51.8 mPa s with a relative methodological error of less than 6%.


Original content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.

1. Introduction

Tomographic imaging allows one to determine slice or volume data of living subjects in a non-invasive way. A variety of methods, for instance CT and MRI, have been developed to generate contrast from differences in physical observables within the imaged tissue. With MRI it is even possible to generate different contrasts by changing the imaging protocol. Within this work we investigate MPI, which was originally designed to determine the concentration of iron-oxide based tracers [1, 2]. Recently, it has been shown that MPI is also capable of decomposing the signal from multiple tracers [3] to generate multiple contrasts, which is the subject of the present paper.

In MPI, magnetic nanoparticles respond to their excitation through magnetic fields with a complex signal, generally picked up by inductive coils. The induced voltage signal can be described by an integral imaging equation that is linear in the spatial distribution of the tracer material and has the so-called system function as integral kernel [4, 5]. The standard MPI reconstruction approach is limited to recovering a single-contrast particle distribution from a single fixed system function, which restricts imaging applications to a single type of MPI tracer with fixed magnetic relaxation behavior. The first important steps showing that these limitations are not of a principal nature were made in magnetic particle spectroscopy (MPS) [6], where it was shown that changes in the particle environment, for instance viscosity and temperature, can be quantified and that different tracer materials can be discriminated [7–9]. Although these results were limited to MPS, they indicated that MPI has the potential to distinguish different tracer materials or the local particle environment by exploiting the related changes in the system function. This would widen the range of potential medical applications of MPI, since temperature, viscosity or mobility imaging could be done simultaneously with the actual imaging of the tracer concentration [10].

In recent years different research groups have shown that the multi-contrast MPS methods can be combined with spatial encoding, leading to multi-contrast MPI providing multi-channel images. Depending on the method, these channels are linked to different tracer materials, particle temperatures, particle mobilities or binding states. Since the individual channels can be visualized by assigning them different colors, the method is also referred to as colored MPI in some publications. We will use the term multi-contrast MPI, since this is the usual notation for comparable methods in magnetic resonance imaging [11]. Even without multi-contrast methods, it was shown that the binding state and the particle temperature lead to signal changes in the reconstructed MPI images [12, 13].

In [14] the particle response to 1D sinusoidal excitations was used to estimate the mobility or binding state of an MPI tracer. This method was improved by consecutively exciting the tracer at two different drive-field frequencies [15]. A different approach was proposed in [16, 17], where 1D x-space MPI signals were evaluated at two drive-field strengths, allowing one to colorize an image based on the Brownian and Néel relaxation mechanisms and yielding qualitative mobility information on the MPI tracer particles. Furthermore, in [18, 19] a proof of concept was provided that viscosity mapping can be achieved by estimating relaxation time constants from the two half cycles of the MPI time signal. Another technique, developed in [3], generalizes the calibration-based algebraic image reconstruction method to exploit the differences in the corresponding particle responses for the imaging of different particle types and aggregation states. Using this approach, simultaneous particle distribution imaging and temperature mapping was achieved in [20].

These methods offer the possibility to qualitatively distinguish lower from higher viscosity or temperature in MPI. However, there has been no quantitative study so far. The purpose of this work is twofold. First, a general theoretical framework for multi-contrast MPI shall be given. Second, the multi-contrast temperature imaging method proposed in [20] shall be adapted for viscosity imaging and a comprehensive quantitative study thereof shall be provided.

2. Theory

In this section a general theoretical framework of multi-contrast MPI is provided. It is structured as follows. First, the MPI signal equation is generalized to take into account the influence of the particle environment, different particle types and aggregation states on the particles' response function. Second, implications of the generalized MPI signal equation for image reconstruction are drawn, with a special focus on calibration-based algebraic reconstruction methods [1, 3, 21, 22].

Without loss of generality we focus on the frequency-space representation of the MPI signal, where the induced voltage signal and the reconstruction problem are formulated and handled in frequency space. In principle, all generalizations can also be formulated in the time domain, which is related to the so-called x-space reconstruction [5].

2.1. The signal equation

The measurement signals in MPI are voltages U(t) induced in one or more receive coils at times $t\in [0,T)$, where T is the period length of the applied measurement sequence. For simplicity we consider a single receive coil in this work. In a pre-processing step, the periodic time signal U is commonly expanded into a Fourier series with coefficients ${\hat{u}}_{k}={\int }_{0}^{T}U(t)\,{{\rm{e}}}^{-2\pi {\rm{i}}{kt}/T}\,{\rm{d}}t$, which allows one to filter out the feed-through of the excitation signal and noisy frequency components prior to reconstruction [23].
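This pre-processing step can be sketched numerically; a minimal example, assuming a voltage signal sampled at equidistant points over one period (signal and sample count are hypothetical):

```python
import numpy as np

def fourier_coefficients(u, T):
    """Approximate u_k = int_0^T U(t) exp(-2*pi*i*k*t/T) dt for a T-periodic
    signal sampled at N equidistant points, via the FFT and the rectangle rule."""
    N = len(u)
    # np.fft.fft computes sum_n u_n exp(-2*pi*i*k*n/N); the factor T/N turns
    # this sum into a Riemann approximation of the integral.
    return np.fft.fft(u) * T / N

# Example: U(t) = cos(2*pi*t/T) has Fourier coefficient u_1 = T/2.
T = 652.8e-6                      # cycle duration from section 3.3
t = np.linspace(0.0, T, 1024, endpoint=False)
u_hat = fourier_coefficients(np.cos(2 * np.pi * t / T), T)
```

In practice the excitation feed-through and noisy components would then simply be dropped from `u_hat` before reconstruction.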

The standard assumption made in MPI is that the spatial distribution of the non-interacting particles $c(\overrightarrow{r})$ and the frequency components of the measured signal are related by an integral equation linear in the particle distribution

Equation (1)

$${\hat{u}}_{k}={\int }_{\Omega }{\hat{s}}_{k}(\overrightarrow{r})\,c(\overrightarrow{r})\,{{\rm{d}}}^{3}r,$$

where ${\rm{\Omega }}\subset {{\mathbb{R}}}^{3}$ is the spatial domain and ${\hat{s}}_{k}(\overrightarrow{r})$ is the system function [1, 21]. We note that this linear assumption is experimentally proven to be valid, if the particle concentration is sufficiently low [24].

Besides linearity, one key assumption made in (1) is that the magnetic response of the particles can be described by a single system function. However, this assumption is easily violated, e.g. by temporal or spatial changes of the particle temperature, viscosity or binding state, or by having multiple tracers differing in their particle core size or hydrodynamic size distributions inside the field of view.

2.2. The generalized signal equation

As shown in [3], the standard model can be too restrictive in cases where multiple particle types are used for imaging. We therefore propose to incorporate these cases by introducing an additional parameter space $V\subseteq {{\mathbb{R}}}^{l}$, $l\in {\mathbb{N}}$, describing all experimentally relevant nanoparticle states, such that each parameter $\overrightarrow{\tau }\in V$ is in one-to-one correspondence with a nanoparticle state. Experimentally relevant parameters are those that can possibly change the MPI signal; in a scenario where, e.g., the temperature is constant throughout an experiment, one may very well omit the temperature from the parameter space. That is, V is a subset of an l-dimensional vector space, where each dimension encodes a degree of freedom such as the temperature, the viscosity of the particles' environment, different particle types or microscopic details such as the diameter of the particle core. To shorten notation, we will denote particles in a state described by the parameter $\overrightarrow{\tau }\in V$ as $\overrightarrow{\tau }$-particles.

This additional parameter space then extends the spatial domain to ${{\mathbb{R}}}^{3}\times V$ on which a generalized particle distribution ${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })$ is defined, which describes the density of $\overrightarrow{\tau }$-particles at each location $\overrightarrow{r}\in {{\mathbb{R}}}^{3}$. The generalized particle distribution is linked to the particle distribution of the single-contrast signal equation (1) by $c(\overrightarrow{r})={\int }_{V}{\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau }){{\rm{d}}}^{l}\tau $. Likewise, a corresponding generalized system function ${\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })$ can be defined, which describes the magnetic response of $\overrightarrow{\tau }$-particles. The generalized signal equation is then given by

Equation (2)

$${\hat{u}}_{k}={\int }_{\Omega }{\int }_{V}{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })\,{\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })\,{{\rm{d}}}^{l}\tau \,{{\rm{d}}}^{3}r.$$

We note that (2) is formulated in a very general manner. Often the generalized particle distribution ${\mathfrak{c}}$ will have additional properties, which change the mathematical structure of equation (2). Three cases where such a property leads to a simplification of the generalized imaging equation are discussed next.

2.2.1. Separable parameter dependency

Consider microscopic parameters such as the size distribution of the particle core and the hydrodynamic particle diameter. Assuming that these parameters do not change with the spatial location of the particles, the generalized particle distribution

Equation (3)

$${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })=c(\overrightarrow{r})\,\varrho (\overrightarrow{\tau })$$

factorizes into the spatial particle distribution c and a state density ϱ describing the microscopic details, with ${\int }_{V}\varrho (\overrightarrow{\tau }){{\rm{d}}}^{l}\tau =1$. In this case a system function ${\hat{s}}_{k}(\overrightarrow{r}):= {\int }_{V}{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })\varrho (\overrightarrow{\tau }){{\rm{d}}}^{l}\tau $ incorporating all the microscopic details can be defined. Inserting ${\mathfrak{c}}$ into equation (2) and using the definition of ${\hat{s}}_{k}(\overrightarrow{r})$ one obtains the MPI standard model formulated in equation (1) as a special case of our generalized model. Since most particles used in MPI have a polydisperse core size and hydrodynamic size distribution, (3) is a common assumption.
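The marginalization defining the effective system function can be sketched numerically on a discretized parameter space; all shapes, the Gaussian state density and the random generalized system function below are placeholders:

```python
import numpy as np

# Hypothetical discretization of a 1D parameter space (e.g. core diameter).
n_k, n_r, n_tau = 4, 6, 50
tau = np.linspace(15e-9, 30e-9, n_tau)             # core diameters in m (assumed)
rho = np.exp(-0.5 * ((tau - 20e-9) / 2e-9) ** 2)   # placeholder state density
rho /= np.trapz(rho, tau)                          # enforce int_V rho dtau = 1

# Placeholder generalized system function sampled on (frequency, voxel, tau).
s_gen = np.random.default_rng(0).standard_normal((n_k, n_r, n_tau))

# Effective single-contrast system function: s_k(r) = int s_k(r, tau) rho(tau) dtau.
s_eff = np.trapz(s_gen * rho, tau, axis=-1)        # shape (n_k, n_r)
```

With `s_eff` in place of the generalized system function, the standard model (1) applies unchanged.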

2.2.2. Discrete set of parameters

Second, the parameter space is assumed to be discrete, i.e. there is only a finite number of experimentally relevant states within the parameter space, e.g. immobilized particles marking a medical instrument and a mobile tracer within the blood pool [25], or different particle types [3]. Let ${\overrightarrow{\tau }}_{i}\in V$, i = 1, 2, ..., n, be the parameters describing the given particle types and ${c}_{{\overrightarrow{\tau }}_{i}}$ the corresponding distribution of ${\overrightarrow{\tau }}_{i}$-particles. In this case one can link the generalized particle distribution ${\mathfrak{c}}$ with the individual distributions ${c}_{{\overrightarrow{\tau }}_{i}}$ by

Equation (4)

$${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })=\sum _{i=1}^{n}{c}_{{\overrightarrow{\tau }}_{i}}(\overrightarrow{r})\,\delta (\overrightarrow{\tau }-{\overrightarrow{\tau }}_{i}).$$

Within this discrete setting one can define a corresponding system function ${\hat{s}}_{k,{\overrightarrow{\tau }}_{i}}(\overrightarrow{r})={\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},{\overrightarrow{\tau }}_{i})$ for each ${\overrightarrow{\tau }}_{i}$ such that the signal equation (2) simplifies to

Equation (5)

$${\hat{u}}_{k}=\sum _{i=1}^{n}{\int }_{\Omega }{\hat{s}}_{k,{\overrightarrow{\tau }}_{i}}(\overrightarrow{r})\,{c}_{{\overrightarrow{\tau }}_{i}}(\overrightarrow{r})\,{{\rm{d}}}^{3}r.$$

In this model, first considered in [3] for multi-contrast imaging, the measurement signal ${\hat{u}}_{k}$ is the superposition of a finite number of individual signals ${\hat{u}}_{k,{\overrightarrow{\tau }}_{i}}$ induced by the particles of type ${\overrightarrow{\tau }}_{i}$, each described by the MPI standard model.
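Numerically, this superposition is equivalent to one standard model with a column-stacked system matrix and concatenated channels, which is the form used for reconstruction later on. A toy sketch with random placeholder matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_k, n_r = 100, 30                                         # frequencies, voxels (hypothetical)
S = [rng.standard_normal((n_k, n_r)) for _ in range(2)]    # system matrices S_{tau_i}
c = [np.abs(rng.standard_normal(n_r)) for _ in range(2)]   # distributions c_{tau_i}

# Superposition of the individual single-contrast signals ...
u_hat = sum(S_i @ c_i for S_i, c_i in zip(S, c))

# ... equals one standard model with stacked matrix and concatenated channels.
u_stacked = np.hstack(S) @ np.concatenate(c)
```

The stacked form makes clear that adding tracer types enlarges the unknown vector while the data stay the same size.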

2.2.3. Field-like parameters

Last, one can consider field-like properties such as temperature and viscosity, which take a specific value at each spatial position. In this case the state parameter is coupled to the spatial position via a vector-valued function $\varphi \,:{\rm{\Omega }}\to V$, i.e. all particles at $\overrightarrow{r}\in {\rm{\Omega }}$ are in the state $\overrightarrow{\tau }=\varphi (\overrightarrow{r})$. Therefore, ${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })$ is zero for all other states and equal to the local particle density $c(\overrightarrow{r})$ for $\overrightarrow{\tau }=\varphi (\overrightarrow{r})$. We can express this in a distributional sense by

Equation (6)

$${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })=c(\overrightarrow{r})\,\delta (\overrightarrow{\tau }-\varphi (\overrightarrow{r})).$$

Provided the map φ is fixed, the signal generated by a particle distribution c is then given by

Equation (7)

$${\hat{u}}_{k}={\int }_{\Omega }{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\varphi (\overrightarrow{r}))\,c(\overrightarrow{r})\,{{\rm{d}}}^{3}r,$$

where ${\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\varphi (\overrightarrow{r}))$ can be seen as a conventional system matrix.

2.3. Image reconstruction

Image reconstruction in MPI aims to invert the forward problem given by the MPI signal equation. It involves the solution of an inverse problem by appropriate methods. In general one can differentiate between algebraic methods that treat the signal equation in a general way [21, 26, 27] and analytic methods that exploit its structure and derive direct reconstruction formulas [5, 28]. A good overview of these different approaches can be found in [29]. In the current work we formulate the reconstruction formulas in an algebraic sense, although it would also be possible to apply analytical methods like the x-space reconstruction.

The goal of a single-contrast image reconstruction is to obtain $c(\overrightarrow{r})$ from measurements ${\hat{u}}_{k}$ by inverting equation (1). Similarly, the goal of a multi-contrast image reconstruction is to determine ${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })$ from the measurements ${\hat{u}}_{k}$ by inverting equation (2). However, since ${\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })$ is usually nonlinear in $\overrightarrow{\tau }$ we expect the related inverse problem to be even more difficult to tackle than the single-contrast image reconstruction problem. Even if a solution to the inverse problem exists, iterative nonlinear solvers are not guaranteed to find it since the solver might get stuck at local minima. Prior to discussing the most general case, the single-contrast reconstruction will be recapped and strategies for handling the multi-contrast signal equation for discrete parameter sets (5) and field-like parameters (7) will be discussed.

2.3.1. Single-contrast image reconstruction

As initially proposed in [1], MPI reconstruction was performed in a single-contrast fashion by solving the inverse problem (1) for a given measurement and system function. In practice, the signal equation is first transformed into a discrete linear system of equations, which is then solved by linear least squares minimization [1, 21, 22]. From the point of view of our generalized signal equation (2) single-contrast image reconstruction uses a fixed system function ${\hat{s}}_{k,{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})={\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},{\overrightarrow{\tau }}_{m})$ and optimizes the distribution ${c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})={\mathfrak{c}}(\overrightarrow{r},{\overrightarrow{\tau }}_{m})$ of ${\overrightarrow{\tau }}_{m}$-particles,

Equation (8)

$$\mathop{\min }\limits_{{c}_{{\overrightarrow{\tau }}_{m}}\geq 0}\sum _{k}{\left|{\int }_{\Omega }{\hat{s}}_{k,{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})\,{c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})\,{{\rm{d}}}^{3}r-{\hat{u}}_{k}\right|}^{2}.$$

Prior to solving this linear system of equations, it is reduced by filtering out noisy frequency components. The reduced least-squares problem can then be efficiently solved by regularized and weighted iterative methods, as discussed in [27].
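A minimal sketch of such an iterative solver: a plain cyclic Kaczmarz sweep with a non-negativity projection, where the damping term `lam` is a strong simplification of the regularized, weighted variants of [27], and matrix and data are synthetic:

```python
import numpy as np

def kaczmarz(S, u, sweeps=50, lam=0.0):
    """Cyclic Kaczmarz for S c = u with optional damping lam and a
    non-negativity projection after each sweep (simplified sketch)."""
    c = np.zeros(S.shape[1])
    for _ in range(sweeps):
        for i in range(S.shape[0]):
            row = S[i]
            c += (u[i] - row @ c) / (row @ row + lam) * row
        np.maximum(c, 0.0, out=c)   # particle concentrations are non-negative
    return c

rng = np.random.default_rng(2)
S = rng.standard_normal((40, 10))         # toy system matrix
c_true = np.abs(rng.standard_normal(10))  # toy particle distribution
c_rec = kaczmarz(S, S @ c_true)
```

For a consistent, well-conditioned toy system the sweeps recover the distribution; on real, noisy MPI data the regularization and row weighting of [27] become essential.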

However, despite the common usage of (8) it is not ensured that the least-squares minimizer ${c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})$ equals the actual distribution of particles $c(\overrightarrow{r})$. This is due to a possible mismatch between ${\overrightarrow{\tau }}_{m}$ that was used during the calibration of ${\hat{s}}_{k,{\overrightarrow{\tau }}_{m}}$ and the parameter $\overrightarrow{\tau }$ that is valid during the object measurement. In a sense, ${c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})$ provides the best approximation of the measurement signal when considering a distribution of ${\overrightarrow{\tau }}_{m}$-particles only. It has already been proven experimentally that a mismatch between the system function $\overrightarrow{\tau }$ and the object $\overrightarrow{\tau }$ leads to serious artifacts and non-quantitative results [30].

2.3.2. Multi-channel image reconstruction for discrete parameter sets

Consider a setting with a discrete parameter set as described by signal equation (5), i.e. the measurement signal ${\hat{u}}_{k}$ can be written as a superposition of the individual signals of a finite number of MPI tracers. In this case the single-contrast reconstruction approach can be straightforwardly extended by taking a finite number of calibrated system functions ${\{{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},{\overrightarrow{\tau }}_{m})\}}_{m=1}^{l}$ into account [3]. The optimization problem (8) can then be reformulated to

Equation (9)

$$\mathop{\min }\limits_{{c}_{{\overrightarrow{\tau }}_{1}},\ldots ,{c}_{{\overrightarrow{\tau }}_{l}}\geq 0}\sum _{k}{\left|\sum _{m=1}^{l}{\int }_{\Omega }{\hat{s}}_{k,{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})\,{c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})\,{{\rm{d}}}^{3}r-{\hat{u}}_{k}\right|}^{2}.$$

In case the calibration measurements ${\hat{s}}_{k,{\overrightarrow{\tau }}_{i}}={\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},{\overrightarrow{\tau }}_{i})$ $\forall i$ match the particles during the measurements, the reconstructed distributions ${c}_{{\overrightarrow{\tau }}_{m}}$ equal the respective spatial distributions of ${\overrightarrow{\tau }}_{m}$-particles. In case the MPI tracers used during the experiment are not matched with the system functions, one cannot directly link the generalized distribution ${c}_{{\overrightarrow{\tau }}_{m}}$ to a spatial particle distribution. One major challenge of the multi-contrast reconstruction method as formulated in (9) is that an increase in the number of system functions (i.e. discretization steps l) increases the ill-posedness of the inverse problem.

2.3.3. Multi-channel image reconstruction for field-like continuous parameter spaces

The signal of measurement scenarios with field-like continuous parameter spaces is described by signal equation (7). In this scenario image reconstruction aims to reconstruct both the distribution of the particles and the vector-valued function φ describing the spatial dependence of the parameter $\overrightarrow{\tau }\in V$:

Equation (10)

$$\mathop{\min }\limits_{c\geq 0,\,\varphi }\sum _{k}{\left|{\int }_{\Omega }{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\varphi (\overrightarrow{r}))\,c(\overrightarrow{r})\,{{\rm{d}}}^{3}r-{\hat{u}}_{k}\right|}^{2}.$$

Since $\overrightarrow{\tau }$ directly describes the properties of the tracer, it makes no sense to assign $\overrightarrow{\tau }$ a specific value at a location where there is no tracer. Thus one would impose further restrictions on φ to ensure the uniqueness of a solution.

The main issue with the optimization (10) is that ${\hat{{\mathfrak{s}}}}_{k}$ needs to be known. Since measuring ${\hat{{\mathfrak{s}}}}_{k}$ for a single parameter $\overrightarrow{\tau }$ is already quite tedious, a measurement-based approach as in the single-contrast scenario seems impractical. Alternatively, one could simulate ${\hat{{\mathfrak{s}}}}_{k}$ using a model-based approach. However, current models are either too simple to reproduce the rich magnetization dynamics of the particles or numerically too costly to use.

2.3.4. General multi-channel image reconstruction

Likewise, the aim of the general multi-channel image reconstruction is to obtain the generalized distribution ${\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })$ from a given measurement ${\hat{u}}_{k}$ and generalized system function ${\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })$:

Equation (11)

$$\mathop{\min }\limits_{{\mathfrak{c}}\geq 0}\sum _{k}{\left|{\int }_{\Omega }{\int }_{V}{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })\,{\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })\,{{\rm{d}}}^{l}\tau \,{{\rm{d}}}^{3}r-{\hat{u}}_{k}\right|}^{2}.$$

Here, too, the main problem is the current unavailability of a suitable generalized system function.

As done in [20], one can circumvent this problem by a discretization of the parameter space

Equation (12)

$${\int }_{V}{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},\overrightarrow{\tau })\,{\mathfrak{c}}(\overrightarrow{r},\overrightarrow{\tau })\,{{\rm{d}}}^{l}\tau \approx \sum _{m=1}^{l}{\hat{{\mathfrak{s}}}}_{k}(\overrightarrow{r},{\overrightarrow{\tau }}_{m})\,{c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r}),$$

where ${c}_{{\overrightarrow{\tau }}_{m}}(\overrightarrow{r})={w}_{m}{\mathfrak{c}}(\overrightarrow{r},{\overrightarrow{\tau }}_{m})$ with appropriate discretization weights wm. This discretization reduces the general image reconstruction problem in equation (11) to the multi-channel image reconstruction for discrete parameter sets (5), yielding l reconstructed distributions ${c}_{{\overrightarrow{\tau }}_{m}}$. For instance in [20] two measured system functions for particles of low temperature ${\overrightarrow{\tau }}_{\mathrm{low}}$ and high temperature ${\overrightarrow{\tau }}_{\mathrm{high}}$ were used to obtain temperature and particle distribution from the corresponding reconstruction channels ${c}_{{\overrightarrow{\tau }}_{\mathrm{low}}}$ and ${c}_{{\overrightarrow{\tau }}_{\mathrm{high}}}$.

Within the experimental part of this work we will proceed quite similarly. We will obtain two system functions at low and high viscosity and perform the multi-channel reconstruction in equation (11). The mapping between the signal distribution of the reconstructed channels and the viscosity will be obtained by a calibration procedure. A detailed description of this procedure can be found in section 4.

3. Experimental setup

The magnetization dynamics of magnetic nanoparticles are strongly influenced by the viscosity of their environment [7]. To quantify the viscosity from MPI measurements, samples with different viscosities are required. In the following section we provide detailed information on the sample preparation, an estimation of the viscosity errors and a thorough description of the MPI measurement protocol.

3.1. Water-glycerol mixtures

A series of viscosity samples was prepared by mixing ferucarbotran (Resovist, FUJIFILM RI Pharma Co., Ltd, Japan) with an iron concentration of 500 mmolFe l−1, glycerol and distilled water. The first seven samples M1 to M7 have log-spaced viscosities of 1.0 mPa s, 1.93 mPa s, 3.73 mPa s, 7.20 mPa s, 13.9 mPa s, 26.8 mPa s and 51.8 mPa s, whereas the last three, U1, U2, and U3, have viscosities of 2.8 mPa s, 20.4 mPa s and 100 mPa s at a room temperature of 23 °C. A total amount of V = 50 μl was prepared per mixture, each with an iron concentration of cFe = 50 mmolFe l−1.

As there is no adequate theoretical description of the viscosity of water-glycerol mixtures, we use an empirical formula for the viscosity η of a water-glycerol mixture [31]. It is given by

Equation (13)

$$\eta ={\eta }_{w}^{\alpha }\,{\eta }_{g}^{1-\alpha },$$

where ηw and ηg are the respective viscosities of water and glycerol and α is a weighting factor depending on the temperature and the glycerol mass fraction ${C}_{{m}_{g}}$. This formula was numerically inverted to obtain ${C}_{{m}_{g}}$ for each of the desired mixture viscosities. At a lab temperature of 22 °C the densities of water and glycerol are ϱw = 997.6 kg m−3 and ϱg = 1263 kg m−3. The glycerol volume fraction of the mixture is given by

Equation (14)

$${C}_{{V}_{g}}=\frac{{C}_{{m}_{g}}/{\varrho }_{g}}{{C}_{{m}_{g}}/{\varrho }_{g}+(1-{C}_{{m}_{g}})/{\varrho }_{w}},$$

where ${C}_{{m}_{g}}$ is the glycerol mass fraction and the corresponding mass and volume fraction of water are given by ${C}_{{m}_{w}}=1-{C}_{{m}_{g}}$ and ${C}_{{V}_{w}}=1-{C}_{{V}_{g}}$. With these the samples were prepared in three steps.

  • 1.  
    Ferucarbotran was diluted with distilled water to lower the iron concentration down to $\tfrac{{c}_{\mathrm{Fe}}}{{C}_{{V}_{w}}}$ to ensure that the final mixture had the desired iron concentration of cFe, where the assumption is made that the diluted ferucarbotran has the same viscosity as distilled water. Next, a volume of $V(1-{C}_{{V}_{g}})$ was pipetted from the diluted ferucarbotran.
  • 2.  
    A mass of ${{VC}}_{{V}_{g}}{\varrho }_{g}$ of glycerol was weighed using a high-precision balance, instead of having to pipette the extremely viscous liquid.
  • 3.  
    Diluted ferucarbotran and glycerol were mixed and filled into a glass capillary with an inner diameter of 1.3 mm. The total height of the mixture within the capillary was about 9.4 mm.
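The mixture design above can be sketched numerically. Since the empirical weighting factor α of [31] is not reproduced here, the sketch substitutes the ideal log-mixing placeholder α = 1 − C_mg, and the fluid viscosities near 22 °C are assumed round values; only the densities are taken from the text:

```python
# Sketch: invert eta = eta_w**alpha * eta_g**(1 - alpha) for the glycerol mass
# fraction, then convert to a volume fraction via the densities (equation (14)).
eta_w, eta_g = 0.955, 1200.0        # viscosities in mPa s (assumed values)
rho_w, rho_g = 997.6, 1263.0        # densities in kg/m^3 (from the text)

def viscosity(c_mg):
    alpha = 1.0 - c_mg              # placeholder for the empirical weighting factor
    return eta_w ** alpha * eta_g ** (1.0 - alpha)

def mass_fraction_for(eta_target):
    lo, hi = 0.0, 1.0               # viscosity is monotone in c_mg: bisect
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if viscosity(mid) < eta_target else (lo, mid)
    return 0.5 * (lo + hi)

def volume_fraction(c_mg):
    # C_Vg from the glycerol mass fraction and the two densities
    return (c_mg / rho_g) / (c_mg / rho_g + (1.0 - c_mg) / rho_w)

c_mg = mass_fraction_for(13.9)      # e.g. target viscosity of sample M5
c_vg = volume_fraction(c_mg)
```

With the actual empirical α the inversion proceeds identically; only `viscosity` changes.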

An overview of all samples with their viscosity, glycerol mass fraction, glycerol volume fraction and iron concentration of the ferucarbotran part can be found in table 1.

Table 1.  The table shows a complete list of all 50 μl samples prepared, with their desired viscosity η. Additionally, the glycerol mass and volume fractions ${C}_{{m}_{g}}$ and ${C}_{{V}_{g}}$, the iron concentration cFe and volume Vw of the water part of the mixture, and the mass mg of the glycerol part required for the preparation of the samples are listed.

Sample name           M1      M2      M3      M4      M5      M6      M7      U1      U2      U3
η (mPa s)             1.0     1.93    3.73    7.20    13.9    26.8    51.8    2.8     20.4    100
${C}_{{m}_{g}}$       0.0182  0.2484  0.4176  0.5463  0.6476  0.729   0.7962  0.3499  0.6972  0.8523
${C}_{{V}_{g}}$       0.0144  0.207   0.362   0.488   0.592   0.680   0.755   0.298   0.645   0.820
cFe (mmolFe l−1)      50.7    63.1    78.3    97.6    122.6   156.3   204.4   71.3    140.9   278.0
Vw (μl)               49.3    39.6    31.9    25.6    20.4    16.0    12.2    35.1    17.7    9.0
mg (mg)               0.910   13.071  22.831  30.781  37.382  42.932  47.685  18.836  40.735  51.777

3.2. Sample viscosity error estimation

In this paragraph the relative uncertainty of the viscosity is calculated using the law of propagation of errors. This uncertainty is mainly influenced by uncertainties during the sample preparation, temperature changes during the experiment, and uncertainties due to the empirical nature of equation (13). Whenever the propagated error of a source depends on the viscosity, the maximum relative error is taken as a bound for all samples.

The preparation of the desired glycerol mass fraction is mainly influenced by the accuracy of the pipette, the accuracy of the balance and temperature deviations during sample preparation. Pipetting could be done with a relative error of no more than 0.7% for the smallest volumes of 11.6 μl. The balance used had a relative error of 0.01%. The pipetting error directly propagates to a relative 0.7% error in the glycerol mass fraction ${C}_{{m}_{g}}$, and the weighing step adds a relative error of 0.01%. Temperature uncertainties slightly affect the pipetting step due to changes in the water density; however, even uncertainties of 4 °C would only lead to a relative error of about 0.1% in ${C}_{{m}_{g}}$. Since all three sources of error are independent of each other, the total relative error of the desired glycerol mass fraction is 0.7%. The propagated relative viscosity error stemming from the glycerol mass fraction depends on the value of ${C}_{{m}_{g}}$ and can be bounded by 4%.

The temperature deviations inside the scanner bore were recorded using a fiber-optical thermometer (Fotemp, Weidmann Technologies Deutschland GmbH, Germany). During measurements the temperature ranged from 21.5 °C to 22.5 °C. These changes cause an uncertainty in the weighting factor α and in the viscosities of glycerol and water. Propagating the 0.5 °C uncertainty yields a relative error of 6%. According to [31], the average relative prediction error of the viscosity formula (13) is below 2.3%. Accumulating these independent errors yields a total relative uncertainty in the viscosity of 7.5%.
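The error bookkeeping above amounts to combining independent relative errors in quadrature; a small sketch reproducing the quoted totals from the values stated in the text:

```python
import math

def combine(*rel_errors):
    """Total relative error of independent sources (law of propagation of errors)."""
    return math.sqrt(sum(e * e for e in rel_errors))

# Glycerol mass fraction: pipette (0.7%), balance (0.01%), temperature (0.1%).
err_cmg = combine(0.007, 0.0001, 0.001)   # stays at about 0.7%

# Viscosity: mass-fraction bound (4%), temperature (6%), empirical formula (2.3%).
err_eta = combine(0.04, 0.06, 0.023)      # about 7.5%
```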

3.3. MPI acquisition parameters

MPI measurements were carried out using a preclinical MPI scanner (Philips preclinical MPI package with a Bruker preclinical MPI system). The system was operated with gradient field strengths of 1.0 T μ0−1 m−1, 1.0 T μ0−1 m−1 and 2.0 T μ0−1 m−1 in x-, y- and z-direction, respectively, and a drive-field amplitude of 12 mT μ0−1 in x- and z-direction, where μ0 is the vacuum permeability. In our system the x-direction coincides with the direction of the bore, the z-axis points upwards, and the y-direction is the remaining direction in the horizontal plane orthogonal to the bore.

Particle excitation is performed by moving the field-free point along a 2D Lissajous trajectory using excitation frequencies of 2.5/102 MHz and 2.5/99 MHz. This results in a cycle duration of 652.8 μs and a drive-field field of view of 24 × 12 mm in the xz-plane. During each measurement, two coils in x- and z-direction pick up the induced voltage signal. These two signals are then filtered and digitized with a bandwidth of 2.5 MHz. The final measurement vector is a concatenation of the Fourier-transformed time signals. For each measurement, the sample was attached to the arm of a three-axis robot (Isel Automation GmbH) with the glass capillary orthogonal to the excitation plane to ensure precise placement within the field of view.

In total, two system matrix measurements and ten regular measurements were performed. For general information on MPI measurements we refer to [32]. The two calibration measurements were obtained on a 26 × 12 mm system matrix field of view (SM-FoV) with a voxel size of 1 × 1 mm by averaging 5000 MPI measurements at each grid position for the samples M1 and M7, respectively. For each of the two samples the measurements were collected and post-processed into a system matrix. Henceforth we refer to the system matrix of sample M1 as S1 and that of M7 as S7. For the regular measurements, each sample was placed in the center of the SM-FoV and 5000 MPI measurements were obtained.

4. Viscosity quantification

The viscosity of the samples is estimated in a three step procedure as shown in figure 1. The proposed method follows the basic idea of temperature mapping in [20]. The three steps, multi-channel image reconstruction, image processing, and viscosity estimation, are outlined as follows.

4.1. Multi-channel image reconstruction

From the set of all recorded frequencies, only those above 80 kHz and with a signal-to-noise ratio above 5 are selected. The remaining frequencies are removed from the measurement vectors and system matrices. To assess the influence of noisy data, the 5000 measurements were block-averaged with block lengths Mavg of 100, 50, 20, 10, 5, 2 and 1, resulting in a total of 50, 100, 250, 500, 1000, 2500 and 5000 reconstructed multi-channel images per sample.
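The block averaging can be expressed as a reshape-and-mean over non-overlapping blocks; the measurement array below is a synthetic stand-in with a hypothetical number of frequency components:

```python
import numpy as np

def block_average(u, m_avg):
    """Average consecutive, non-overlapping blocks of m_avg measurement vectors."""
    n_blocks = u.shape[0] // m_avg
    return u[:n_blocks * m_avg].reshape(n_blocks, m_avg, -1).mean(axis=1)

rng = np.random.default_rng(3)
u = rng.standard_normal((5000, 16))   # 5000 measurements, 16 frequency components

counts = [block_average(u, m).shape[0] for m in (100, 50, 20, 10, 5, 2, 1)]
# counts == [50, 100, 250, 500, 1000, 2500, 5000]
```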

Using the two system matrices S1 and S7, the multi-channel image reconstruction proposed in [3] was performed, i.e. for each measurement vector $\hat{{\bf{u}}}$ the Tikhonov-regularized optimization problem

Equation (15)

$$\mathop{\min }\limits_{{{\bf{c}}}_{1},{{\bf{c}}}_{7}\geq 0}{\left\Vert \left(\begin{array}{cc}{{\bf{S}}}_{1} & {{\bf{S}}}_{7}\end{array}\right)\left(\begin{array}{c}{{\bf{c}}}_{1}\\ {{\bf{c}}}_{7}\end{array}\right)-\hat{{\bf{u}}}\right\Vert }_{2}^{2}+\lambda {\left\Vert \left(\begin{array}{c}{{\bf{c}}}_{1}\\ {{\bf{c}}}_{7}\end{array}\right)\right\Vert }_{2}^{2}$$

is solved. Here, N = 26 × 12 × 1 is the total number of voxels within each channel, $\tilde{\lambda }$ is the regularization parameter, and c1 and c7 are the channels of the multi-channel reconstruction. Instead of reporting the regularization parameter λ directly, we report the normalized parameter $\tilde{\lambda }=\lambda N/\parallel \left(\begin{array}{cc}{{\bf{S}}}_{1} & {{\bf{S}}}_{7}\end{array}\right){\parallel }_{{\rm{F}}}$, where $\parallel \cdot {\parallel }_{{\rm{F}}}$ denotes the Frobenius norm. Equation (15) is solved iteratively with 10 000 iterations of the Kaczmarz solver and $\tilde{\lambda }={10}^{-5}$.
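The structure of the stacked problem can be illustrated with a direct regularized solve on small random stand-in matrices; the paper uses a Kaczmarz solver with a positivity constraint on the real system matrices, and all dimensions here except the 26 × 12 voxel grid are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_k, n_vox = 200, 26 * 12                  # frequency components (assumed), voxels per channel
S1 = rng.standard_normal((n_k, n_vox))     # stand-ins for the calibrated system matrices
S7 = rng.standard_normal((n_k, n_vox))
S = np.hstack([S1, S7])                    # ( S1  S7 )

lam_tilde = 1e-5
lam = lam_tilde * np.linalg.norm(S, 'fro') / n_vox   # undo lam_tilde = lam * N / ||S||_F

c_true = np.abs(rng.standard_normal(2 * n_vox))
u = S @ c_true

# Normal equations of  min ||S c - u||_2^2 + lam ||c||_2^2  (positivity omitted here).
c_rec = np.linalg.solve(S.T @ S + lam * np.eye(2 * n_vox), S.T @ u)
c1, c7 = c_rec[:n_vox], c_rec[n_vox:]
```

Splitting `c_rec` back into `c1` and `c7` yields the two channel images that are processed further below.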

4.2. Image processing

Each reconstructed image contains two channels c1 and c7 as shown in figure 1, with the sample signal located in the center. For each image the signal of each channel is collected by summation over a 6 × 4 mm region of interest (ROI) in the center. Note that the ROI is wider than it is high to account for the lower gradient strength in x-direction. We refer to the summed signal from channel c1 as σ1 and to that from channel c7 as σ7. Depending on the averaging block length Mavg one ends up with 50, 100, 250, 500, 1000, 2500, or 5000 value pairs (σ1, σ7) for each sample, as shown in figure 2.

In the last processing step the channel weight

$\lambda ({\sigma }_{1},{\sigma }_{7})=\frac{{\sigma }_{1}}{{\sigma }_{1}+{\sigma }_{7}}\qquad (16)$

is calculated. It assigns each tuple (σ1, σ7) a number in the interval $[0,1]$ which is invariant under rescaling $({\sigma }_{1},{\sigma }_{7})\mapsto (\alpha {\sigma }_{1},\alpha {\sigma }_{7})$, $\alpha \in {{\mathbb{R}}}_{+}$. The idea is to quantify the relative signal weight of the channels. With λ = 1 only c1 contains signal from a sample, whereas with λ = 0 only c7 contains signal. This definition is purely heuristic and motivated by the observation that the relative signal weight shifts from c1 to c7 as the viscosity of the sample increases, as shown in figure 1. The complete mapping from MPI measurement data to $\lambda \in [0,1]$ is summarized in figure 3.
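Computing σ1, σ7, and the channel weight from a two-channel image takes only a few lines; the grid size, ROI placement, and toy signal below are illustrative assumptions:

```python
import numpy as np

def channel_weight(c1, c7, roi):
    """Relative signal weight lambda = sigma1 / (sigma1 + sigma7),
    where sigma_k is the sum of channel k over the region of interest."""
    sigma1 = float(c1[roi].sum())
    sigma7 = float(c7[roi].sum())
    return sigma1 / (sigma1 + sigma7)

# 26 x 12 voxel image with a 6 x 4 voxel ROI in the center
c1 = np.zeros((26, 12))
c7 = np.zeros((26, 12))
roi = np.zeros((26, 12), dtype=bool)
roi[10:16, 4:8] = True

c1[12, 5] = 3.0          # synthetic sample signal in channel 1
c7[12, 6] = 1.0          # weaker synthetic signal in channel 7
print(channel_weight(c1, c7, roi))            # 0.75
print(channel_weight(2 * c1, 2 * c7, roi))    # scale invariant: still 0.75
```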

Figure 1. Exemplary multi-channel reconstruction of the first 5 averaged frames of measurement M4. The two reconstruction channels correspond to the system functions of M1 (top left) and M7 (top right). The region of interest with the sample signal is enclosed by the white box in the center. Below, the regions of interest of multi-channel reconstructions of the first 5 averaged frames of measurements M1 to M7, U1, U2, and U3 are shown. The two reconstruction channels correspond to the system functions of M1 (top row) and M7 (bottom row). From left to right the measurements are ordered by the viscosity of the water-glycerol mixture of the sample, with low viscosities on the left and high viscosities on the right. All subplots use the same colormap, which is shown at the bottom.


4.3. Viscosity estimation

Viscosity estimation is based on a calibration curve. The curve is obtained by using every fourth data point from the data sets of M1 to M7 with Mavg = 100 as training data for a piecewise linear curve. All remaining data, including that with ${M}_{\mathrm{avg}}\ne 100$, is used for testing the viscosity estimation. The values λ(σ1, σ7) of the training data are averaged for each sample and paired with the sample's viscosity. As shown in figure 4, the values of λ decrease monotonically with increasing viscosity.

Based on the data points we define a monotonic function λ(η) using linear interpolation between and linear extrapolation beyond the data points, as shown in figure 4. This in turn allows one to define the inverse mapping η(λ) via the simple root-finding problem

$\lambda (\eta ({\lambda }^{* }))-{\lambda }^{* }=0.\qquad (17)$

This mapping is used to estimate the viscosities of the remaining data sets, as shown exemplarily in figure 5. For each sample and block averaging length the mean estimated viscosity ηestim, its standard deviation σ, and the systematic deviation ${\rm{\Delta }}\eta =| {\eta }_{\mathrm{sample}}-{\eta }_{\mathrm{estim}}| $ from the sample viscosity ηsample were calculated, as listed in table 2.
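Since the calibration curve is monotonically decreasing, its inversion reduces to interpolating η over λ. A minimal sketch with NumPy follows; the calibration points used here are made up for illustration, not the measured ones:

```python
import numpy as np

def make_eta_of_lambda(etas, lambdas):
    """Build the inverse mapping eta(lambda) of a monotonically decreasing,
    piecewise linear calibration curve lambda(eta), with linear
    extrapolation beyond the calibration points.

    etas    : sample viscosities in increasing order
    lambdas : corresponding mean channel weights (decreasing)
    """
    lam = np.asarray(lambdas, dtype=float)[::-1]   # increasing for np.interp
    eta = np.asarray(etas, dtype=float)[::-1]

    def eta_of(l):
        l = np.asarray(l, dtype=float)
        out = np.interp(l, lam, eta)
        # np.interp clamps outside [lam[0], lam[-1]]; extrapolate instead
        lo = eta[0] + (l - lam[0]) * (eta[1] - eta[0]) / (lam[1] - lam[0])
        hi = eta[-1] + (l - lam[-1]) * (eta[-1] - eta[-2]) / (lam[-1] - lam[-2])
        return np.where(l < lam[0], lo, np.where(l > lam[-1], hi, out))

    return eta_of

# made-up calibration points (eta in mPa s, lambda dimensionless)
eta_of = make_eta_of_lambda([1.0, 3.7, 13.9, 51.8], [0.9, 0.6, 0.3, 0.1])
print(float(eta_of(0.45)))   # halfway between 13.9 and 3.7, i.e. about 8.8
```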

5. Results

Within this section we present the main observations made in each step of the viscosity estimation as summarized in figure 3. We start by surveying the multi-channel reconstruction, continue by examining the image processing step, and finish with a detailed error analysis of the estimated viscosities.

5.1. Multi-channel image reconstruction

All multi-channel reconstructions yield a two-channel image with the signal from the sample located in the center, as shown in figure 1. We observed a strong correlation between block averaging length Mavg and image noise, with low noise for Mavg = 100 and increasing noise for smaller Mavg. Moreover, the multi-channel reconstruction maps signal from the low viscosity samples M1, M2, and U1 predominantly to channel M1 and signal from the high viscosity samples M6, M7, and U3 predominantly to channel M7. For the samples with medium viscosity, M3, M4, M5, and U2, one observes quite significant changes in the signal distribution between both channels, as shown in figure 1. Though only one reconstruction is shown, this trend can be observed in all reconstructed data sets. In most cases this allows one to visually discriminate low, medium, and high viscosity samples. Due to the improved signal-to-noise ratio, discrimination works better for large block averaging lengths. Because the signal shifts nonlinearly between the channels, we were not able to estimate the viscosity solely based on visual inspection of the reconstructed two-channel images.

Figure 2. Box plots of the distributions of the signals σ1 (top row), σ7 (2nd row), and the relative channel weight λ(σ1, σ7), shown for two averaging block lengths, Mavg = 1 (left) and Mavg = 100 (right). The boxes contain all data points from the 25th to the 75th percentile split by the median, while the whiskers contain the remaining data points. The larger the boxes and whiskers, the wider the distribution of the respective signals σ1, σ7, and λ(σ1, σ7). As a general trend the width of the distributions shrinks with increasing Mavg. Moreover, the relative signal weight allows a qualitative discrimination of all samples for Mavg = 100. For Mavg = 1 some distributions of λ(σ1, σ7) overlap, preventing a reliable discrimination.


5.2. Image processing

Further processing of each image yields the signal weights σ1 and σ7. Due to the aforementioned noise in the reconstructed images one observes more or less widely spread distributions of σ1 and σ7. The distributions are wider for small Mavg and narrower for large Mavg, as shown in figure 2 for each sample for Mavg = 1 and Mavg = 100. Independent of the sample one observes a reduction of the distribution width by a factor of about seven as the block averaging length increases from Mavg = 1 to Mavg = 100. The only exception is sample M1 with Mavg = 50 and Mavg = 100, where the width of the distribution is exactly zero. As λ(σ1, σ7) is calculated directly from σ1 and σ7, these observations also apply to the relative signal weight.

Figure 3. The diagram summarizes all processing steps of the viscosity estimation. First, the measurement $\hat{u}$ of a sample is reconstructed into a multi-channel image using the system matrices S1 and S7. Second, the multi-channel image is processed to obtain the relative signal weight. Finally, the viscosity is estimated by inverting a data-driven calibration curve.


Figure 4. A quarter of the data sets of M1 to M7 with Mavg = 100 is used to obtain the calibration points shown above. These points are used to create a piecewise linear model, which can be inverted to estimate the viscosity of the remaining data sets.


Next we study the mean values of σ1, σ7, and λ(σ1, σ7). These mean values mainly depend on the sample and show only slight variations with the block averaging length. For the mean values of σ1 and σ7 we observe a non-monotonic change with respect to the sample viscosity, as shown in figure 2, where the samples are sorted by their viscosity. A quantitative analysis reveals that on average σ1 is larger for sample M2 than for sample M1. From there on σ1 decreases as the viscosity increases. Likewise, the mean signal σ7 increases monotonically from sample M1 to M6 and decreases from there on, i.e. for the last two samples M7 and U3. In contrast, the mean relative signal weight λ(σ1, σ7) decreases with increasing viscosity for block averaging lengths of 5, 10, 20, 50, and 100, which is the basis of our viscosity estimation.

5.3. Viscosity estimation

Using the method proposed in this work, we are able to estimate the viscosity of all samples. Due to the noise in the signal, the estimated viscosities are distributed around a mean value for all block averaging lengths and samples. The distributions are wider for smaller Mavg and narrower for larger Mavg, as shown in figure 5 for block averaging lengths of 1, 10, and 100. The mean estimated viscosity, its standard deviation, and its systematic deviation from the sample viscosity are listed in table 2.

Figure 5. Viscosity estimation of all samples for averaging block lengths of Mavg = 100 (left), Mavg = 10 (middle), and Mavg = 1 (right). The plots in the top row show box plots of the estimated viscosity values, with the boxes containing all points from the 25th to the 75th percentile split by the median. Horizontal lines indicate the viscosities of the samples M1 to M7 and U1 to U3 in increasing order and logarithmically spaced. The bottom row shows the corresponding relative prediction errors (ηestimated − ηsample)/ηsample. The upper whiskers of M1, M2, and M6 are cut off and would end at 280%, 131%, and 111%, respectively. As the block averaging length decreases, the systematic and statistical errors of the estimation increase. Note that the viscosity of sample U3, which lies far outside the range of the calibration samples, is severely underestimated in all cases.


Table 2.  The table lists all samples with their prepared (ηsample) and mean estimated (ηestim) viscosities. Additionally, the absolute and relative (σ/ηestim) standard deviations of the estimated viscosities are provided. Moreover, the systematic deviation between estimated and sample viscosity (${\rm{\Delta }}\eta =| {\eta }_{\mathrm{sample}}-{\eta }_{\mathrm{estim}}| $) is listed. The estimated values are in good agreement with the true values, except for sample U3 and for low block averaging lengths Mavg = 1 and Mavg = 2.

Sample name M1 M2 M3 M4 M5 M6 M7 U1 U2 U3
ηsample (mPa s) 1.0 1.93 3.73 7.20 13.9 26.8 51.8 2.8 20.4 100
ηestim (mPa s, Mavg = 100) 1.0() 1.9(1) 3.7(1) 7.3(1) 14.0(4) 28(1) 52(2) 2.7(1) 21(4) 62(2)
σ/ηestim (%, Mavg = 100) 0 6 4 2 3 5 3 2 5 4
Δη/ηestim (%, Mavg = 100) 0 1 1 1 1 3 0.1 3 4 38
ηestim (mPa s, Mavg = 50) 1.0() 1.9(2) 3.7(2) 7.2(2) 14.0(4) 27(1) 51(3) 2.7(2) 21(5) 61(3)
σ/ηestim (%, Mavg = 50) 0 8 5 3 3 5 5 2 6 6
Δη/ηestim (%, Mavg = 50) 0 0.4 1 1 1 2 1 3 5 39
ηestim (mPa s, Mavg = 20) 1.000(3) 1.8(2) 3.6(2) 7.2(3) 14.0(6) 27(2) 51(4) 2.6(2) 20.8(8) 58(6)
σ/ηestim (%, Mavg = 20) 0 10 6 4 4 8 8 4 8 10
Δη/ηestim (%, Mavg = 20) 0 3 1 0.1 1 2 3 2 6 42
ηestim (mPa s, Mavg = 10) 1.00(3) 1.8(3) 3.6(3) 7.2(4) 14.0(8) 27(3) 49(5) 2.6(3) 21(1) 55(7)
σ/ηestim (%, Mavg = 10) 3 14 8 5 6 10 11 5 10 13
Δη/ηestim (%, Mavg = 10) 0.4 5 2 1 0.6 1 5 1 8 45
ηestim (mPa s, Mavg = 5) 1.0(1) 1.7(3) 3.5(4) 7.0(5) 14(1) 27(3) 46(7) 2.5(4) 20(1) 50(10)
σ/ηestim (%, Mavg = 5) 9 19 11 7 8 12 16 7 15 20
Δη/ηestim (%, Mavg = 5) 3 9 5 2 1 1 12 1 11 50
ηestim (mPa s, Mavg = 2) 1.4(4) 1.7(4) 3.4(6) 6.8(7) 13(2) 25(4) 36(9) 2.4(5) 20(2) 40(10)
σ/ηestim (%, Mavg = 2) 30 26 18 11 12 15 27 11 21 34
Δη/ηestim (%, Mavg = 2) 40 11 8 5 4 8 31 4 13 64
ηestim (mPa s, Mavg = 1) 1.5(5) 1.7(5) 3.3(8) 7(1) 13(2) 23(5) 30(10) 2.4(6) 19(3) 30(10)
σ/ηestim (%, Mavg = 1) 34 29 24 16 17 22 33 16 25 40
Δη/ηestim (%, Mavg = 1) 54 9 10 8 8 13 40 8 13 69

A closer analysis reveals that the viscosities of the samples M1 to M7, which were also used in the calibration process, were approximated best. On average, estimated and sample viscosity differed by up to 3%, 2%, 3%, 5%, 12%, 40%, and 54% for block averaging lengths of 100, 50, 20, 10, 5, 2, and 1, respectively. Down to Mavg = 10 the sample viscosity lies within one standard deviation of the mean estimated viscosity. Assuming independence of systematic and statistical errors, a total methodological error of the estimation of up to 6%, 8%, 10%, 15%, 21%, 50%, and 64% for block averaging lengths of 100, 50, 20, 10, 5, 2, and 1 can be calculated.
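Combining the two components under the independence assumption amounts to a sum in quadrature. As a worked example with hypothetical 3% systematic and 5% statistical components:

```python
import math

def total_error(systematic_pct, statistical_pct):
    """Combine uncorrelated systematic and statistical relative errors
    in quadrature: sqrt(sys^2 + stat^2)."""
    return math.hypot(systematic_pct, statistical_pct)

# made-up illustrative values: 3% systematic, 5% statistical
print(round(total_error(3.0, 5.0), 1))   # 5.8
```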

As for the samples U1, U2, and U3, which were not used for calibration, we need to distinguish between the samples U1 and U2, whose viscosities lie within the range of the calibration samples, and U3, whose viscosity lies outside the calibration range. For the viscosity estimation of U1 and U2 one observes a higher average difference between estimated and sample viscosity for block averaging lengths of 100, 50, 20, and 10, and lower ones for block averaging lengths of 5, 2, and 1. In total, the methodological errors are up to 6%, 8%, 10%, 13%, 18%, 25%, and 28% for block averaging lengths of 100, 50, 20, 10, 5, 2, and 1. The viscosity of sample U3 is significantly underestimated, with its prepared value being 100 mPa s and its estimated value ranging from 62 mPa s for Mavg = 100 down to 30 mPa s for Mavg = 1. Performing a separate analysis, we found that including U3 in the set of calibration samples reduces this problem to the point where sample U3 is approximated as well as U1 and U2.

6. Discussion

Using an adaptation of the temperature mapping method proposed in [20], we provided the first viscosity quantification of a series of samples within a viscosity range of 1.0–51.8 mPa s with a methodological error of 6%. This extends the qualitative proof-of-concept given in [19] for a viscosity range of 0.89–15.33 mPa s by quantitative results. It also allows a first comparison of the viscosity mapping capabilities of MPI with spectroscopic methods, which reported methodological errors as low as 0.1% [7] within a viscosity range of 0.96–260 mPa s.

The total error of the viscosity estimation is composed of a methodological estimation error and a calibration error. The calibration error stems from the sample preparation using an empirical formula [31] describing the viscosity of a water-glycerol mixture, causing a 7.5% uncertainty in the viscosities of our samples. By directly measuring the sample viscosities one would likely be able to decrease this error to the point where it can be neglected compared to the methodological error, which is why we focused on the analysis of the methodological error in this work. If a direct measurement is not possible, we recommend taking the recently published work [33] into account for sample preparation, as it considers the volume contraction of the mixture and contains a better approximation of the densities of water, glycerol, and water-glycerol mixtures. The methodological error is composed of a systematic and a statistical component. The systematic component originates from the calibration step of our viscosity estimation. As shown exemplarily for sample U3, the systematic error can be large if one tries to quantify viscosities outside the calibration range. If one instead included the measurements of U3 in the calibration process, this systematic error would decrease significantly. Moreover, we found that the systematic deviation was lowest for viscosities close to the calibration points. Hence, one could use a finer discretization of the calibration curve, or use a data-driven smooth calibration curve such as a B-spline or a polynomial instead of the piecewise linear model, to further decrease this systematic error. The statistical error is mainly influenced by the block averaging length, which is in one-to-one correspondence with the signal-to-noise ratio of our MPI measurement data. Therefore, we expect that any further reduction of this error can be achieved by directly increasing the signal-to-noise ratio. This in turn improves the spectral resolution of the viscosity estimation, i.e. the minimal difference between two viscosities that can be distinguished based on their estimated values, as indicated by the dependence between the block averaging length and the distribution width of the estimated viscosities.

Apart from our experimental results, we introduced a theoretical framework generalizing the single-contrast signal equation [4] to cases where multiple signal sources, e.g. particles of different temperatures, binding states, or viscosities, contribute to the MPI measurement signal. In this work we specifically used this framework to adapt the qualitative multi-contrast temperature imaging method [20] to a viscosity imaging scenario, thereby providing a potential blueprint for other multi-contrast applications, such as the quantification of particle binding fractions or temperature. Moreover, our framework provides a starting point for an in-depth analysis of the capabilities and limitations of multi-contrast MPI and may help to compare different multi-contrast imaging and reconstruction approaches.

If we compare the algebraic multi-contrast reconstruction used here with single-contrast algebraic reconstruction methods, we find some noticeable differences. In MPI the gold standard for algebraic reconstruction involves the solution of a large system of equations, which is usually done using the Kaczmarz algorithm [34]. In single-contrast reconstructions only a few iterations of the Kaczmarz algorithm are required to obtain good approximations of the solution, since the rows of the corresponding system matrix are close to orthogonal. In contrast, the multi-contrast system matrix seems to have lost this property and requires 10 000 iterations for good approximations of the solution. In our 2D excitation scenario 10 000 iterations are still feasible within a couple of seconds. For extensions from 2D to 3D excitations, however, they could pose serious limitations, unless the additional information encoded within the receive signal leads to a significant speed up in the convergence of the Kaczmarz method. Therefore, it might be better to use different reconstruction approaches, such as the row-based iterative approach [35] applied successfully for multi-contrast 3D imaging with as few as 10 iterations in [25]. Moreover, it should be worthwhile to develop reconstruction methods specifically tailored to the multi-contrast case.

Note that our current approach is still limited. Our experiments were restricted to having one sample at a time centered within the field of view. Moreover, we disregarded the mapping of the sample concentration, whereas in an ideal scenario one would like to quantify both. Therefore, it would be interesting to investigate whether our approach is able to handle an increased number of samples within the field of view, randomized spatial positioning, and varying viscosities and iron concentrations. These would be the last steps towards the application of multi-contrast MPI in in vivo experiments, where the viscosity of the blood is usually unknown. Using multi-contrast MPI it would be possible to quantify the true particle concentration and simultaneously determine the viscosity of blood, and potentially even the temperature as an additional spectral parameter.

7. Conclusion

In this work we generalized the single-contrast imaging equation to account for a dynamic magnetic relaxation behavior, which cannot be described by a single fixed system function. This deepens our understanding of the effects that enable signal separation in multi-contrast MPI. Moreover, we provided the first quantitative study of the functional imaging capabilities of MPI with the viscosity as functional parameter, likely to be followed by other parameters such as particle temperature or binding fraction. This might open up a variety of new medical applications where, for instance, local temperature changes during inflammation could be detected. Another highly interesting application could be the detection of the blood coagulation that occurs during brain hemorrhage. Using the techniques developed in the present paper, it will be possible not only to detect but also to quantify the amount of blood coagulation.

Acknowledgments

The authors thankfully acknowledge the financial support by the German Research Foundation (DFG, grant number KN 1108/2-1) and the Federal Ministry of Education and Research (BMBF, grant number 05M16GKA). Moreover, we would like to thank our anonymous reviewers for their help in improving this manuscript.
