Research Article

Lanczos bidiagonalization-based inverse solution methods applied to electrical imaging of the heart by using reduced lead-sets: A simulation study

Article: 1256461 | Received 06 May 2016, Accepted 21 Oct 2016, Published online: 08 Dec 2016

Abstract

In the inverse problem of electrocardiography (ECG), the electrical activity of the heart is estimated from body surface potential measurements. This electrical activity provides useful information about the state of the heart, and thus it may help clinicians diagnose and treat heart diseases before they cause serious health problems. For practical application of the method, using a small number of electrodes for data acquisition is an advantage. Additionally, the inverse problem of ECG is ill-posed due to attenuation and smoothing within the body. Therefore, the solution of the ECG inverse problem has to be regularized. In this study, we constrain ourselves to two Lanczos-bidiagonalization-based inverse solution methods, namely, Lanczos least-squares QR (L-LSQR) factorization and Lanczos truncated total least-squares (L-TTLS). Tikhonov regularization is also implemented as a basis for comparison. We use body surface measurements simulated from epicardial potentials measured on the surface of canine hearts. In these experiments, the hearts are stimulated from the ventricles at various sites, mimicking ectopic beats. Torso potentials are obtained from these epicardial measurements by multiplying them with the forward transfer matrix and adding Gaussian-distributed noise. We solve the inverse problem using different numbers of leads on the body surface (771, 192, 64, and 32 leads), and assess the performances of these regularization methods for the reduced lead-sets. These reduced lead-sets are selected from the primary 771-lead configuration by using two main approaches. The first approach is manually selecting appropriate leads, and the second uses an inverse problem approach to select leads sequentially. The results show that the L-TTLS method is more successful in reconstructing epicardial potentials than the L-LSQR method. The L-TTLS method is faster than Tikhonov regularization, since it benefits from the bidiagonal form of the forward matrix. Reducing the number of electrodes to 64 has a small effect on the solutions, but with 32 leads, inverse solutions become less precise, and the difference between the results of Tikhonov regularization and the L-TTLS method becomes less significant.

Public Interest Statement

In modern society, an increasing number of people suffer from heart disease. Diagnosing heart problems at an early stage is an important step toward quick recovery. One way to achieve this is to understand the electrical activity of the heart by measuring the heart potentials reflected onto the body surface. Estimating the source of these potentials, which is the ultimate goal of the inverse problem of electrocardiography, then gives an idea about the electrical activity of the heart. In general, the more electrodes used to detect the electrical signals, the better the signal quality. However, there are limitations on the locations and the number of electrodes that can be used for this purpose. Here, the aim is to find an electrode configuration that uses a smaller number of electrodes yet yields nearly the same signal quality as a large number of electrodes, and to solve the inverse problem of electrocardiography faster and more precisely.

1. Introduction

According to a World Health Organization (WHO) report, an estimated 17 million people around the world die of cardiovascular diseases such as heart attack and stroke every year (WHO, 2014). This growing number of patients has motivated researchers to seek clinically practical non-invasive methods to attain detailed and precise information about the electrical activity of the heart. Although several cardiac abnormalities are diagnosable by the standard 12-lead ECG, many others are not detectable by this fixed lead configuration. Furthermore, the 12-lead ECG suffers from sparse sampling in space. Alternatively, the body surface potential mapping (BSPM) approach has been proposed, in which ECG signals are acquired from a large number of electrodes densely placed on the torso surface. Still, these measurements also suffer from the attenuation and smoothing that occur inside the body. One way to recover the details lost on the body surface is to estimate the actual electrical sources within the heart that generate the body surface measurements by solving the inverse ECG problem. This technique is also called electrical imaging of the heart. The solution of the inverse ECG problem can help physicians diagnose various heart diseases and treat them properly before they turn into life-threatening health issues. However, the inverse ECG problem is ill-posed because of the attenuation and smoothing of cardiac signals inside the body (Gulrajani, 1998; Ramanathan, Ghanem, Jia, Ryu, & Rudy, 2004). Thus, even small perturbations in the measurements or errors in the mathematical model relating the sources to the measurements can cause unbounded errors in the solution. To overcome this ill-posed nature of the problem and to obtain reasonable and meaningful cardiac electrical images, the solution has to be regularized.

Researchers have proposed several regularization and statistical estimation methods to overcome the ill-posedness of the inverse ECG problem. Tikhonov regularization (Tikhonov & Arsenin, 1977) and truncated singular value decomposition (TSVD) (Shou, Xia, & Jiang, 2007) are the most well-known regularization methods. A modified version of Tikhonov regularization, called the "Twomey" regularization, has not been as practical as Tikhonov regularization, since it needs prior information about the desired solution (Twomey, 1963). Truncated total least-squares (TTLS) is another method that has been shown to be effective, especially in the presence of geometric noise (Shou et al., 2008). However, as with TSVD, computation of the singular values of a large matrix is time consuming. Alternatively, the Lanczos-bidiagonalization-based TTLS method (L-TTLS) has been proposed by Güçlü (2013) to reduce the computational complexity and the run time. Another Lanczos-bidiagonalization-based method is Lanczos least-squares QR factorization (L-LSQR); Jiang, Xia, Shou, and Tang (2007) compared the performances of the conventional regularization methods, Tikhonov regularization and TSVD, with the L-LSQR method. Most regularization methods employ L2-norm (Euclidean norm)-based approaches, in both the data and penalty terms, to deal with the ill-posed nature of the inverse problem. L1-norm based solutions have also been proposed to overcome the over-smoothing effects present in L2-norm based solutions (Ghosh & Rudy, 2009; Shou, Xia, Liu, Jiang, & Crozier, 2011; Wang, Qin, Wong, & Heng, 2011). Bayesian maximum a posteriori (MAP) estimation (Serinagaoglu, Brooks, & MacLeod, 2006; van Oosterom, 1999) and Kalman filters and smoothers (Aydin & Dogrusoz, 2011; Ghodrati, Brooks, Tadmor, & MacLeod, 2006) are some of the statistical approaches that incorporate prior information on the solutions in the form of a prior probability density function.

For cardiac electrical imaging to be practical in clinical applications, it is important to use a small number of electrodes (Ghodrati, Brooks, & MacLeod, 2007). At the same time, the data acquired via these electrodes should effectively cover the changes in the electrical potentials reflected onto the body surface. Another point to keep in mind is that a smaller number of electrodes usually results in an under-determined system. In that case, much of the burden of reconstructing the cardiac electrical image falls on the regularization algorithms.

In this paper, our main goal is to study the effects of employing a smaller number of leads (electrodes) for recording the body surface potentials on the inverse ECG solutions. We propose a simple lead-set reduction algorithm based on inverse solutions, and we compare the performances of these reduced lead-sets with the performances of manually selected lead-sets and the complete lead-set. We apply the Lanczos-bidiagonalization-based methods, L-TTLS and L-LSQR, in this study to take advantage of their reduced computational complexity and fast computation. L-LSQR has been applied previously to solve the inverse ECG problem (Jiang et al., 2007), and L-TTLS has been proposed in an earlier study by a member of our research group (Güçlü, 2013) to reduce the computational complexity and the run time. However, to the best of our knowledge, neither method has been assessed elsewhere in terms of its performance with reduced lead-sets. Here, we compare the performances of these methods with that of Tikhonov regularization in reconstructing real heart potentials. Results are quantitatively compared by calculating the correlation coefficient (CC) and the relative difference measurement star (RDMS). Qualitative comparisons are also carried out by plotting the heart surface potential distributions using the MAP3D visualization software, which provides an interactive display of both geometry and data assigned to elements of that geometry.

2. Problem definition

In cardiac electrical imaging, the electrical sources in the heart may be represented by various equivalent source models. In this study, we use epicardial potentials, which are the potentials on the outer surface of the heart. Epicardial potentials are related to body surface potentials by the following linear system:

\[ B = AX + N, \tag{1} \]

where \(B \in \mathbb{R}^{m \times p}\) and \(X \in \mathbb{R}^{n \times p}\) are the matrices that contain the body surface potentials and the epicardial potentials, respectively. The matrix \(A \in \mathbb{R}^{m \times n}\) is the forward transfer matrix, which is the result of the solution of the forward problem of ECG, and the matrix \(N \in \mathbb{R}^{m \times p}\) models the measurement errors. In vector notation we can rewrite the above equation as:
\[ b(t) = A\,x(t) + n(t), \qquad t = 1, 2, \ldots, p, \tag{2} \]

where t represents each time instant, and b(t), x(t), and n(t) correspond to the tth column of matrices B, X, and N, respectively. The solution to Equation (2) is found separately for every t. For simplicity, we drop the time index from the equations in the following sections.
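To make the simulation setup concrete, the following sketch (a minimal NumPy illustration with hypothetical names, not the exact implementation used in the study) generates noisy body surface potentials according to Equation (1), with Gaussian noise scaled to a target SNR; the Results section uses 30 dB.

```python
import numpy as np

def simulate_torso_potentials(A, X, snr_db=30.0, rng=None):
    """Hypothetical sketch of the forward simulation in Equation (1):
    B = A X + N, with N drawn as white Gaussian noise at a given SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    B_clean = A @ X                                    # noise-free torso potentials
    signal_power = np.mean(B_clean ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    N = rng.normal(scale=np.sqrt(noise_power), size=B_clean.shape)
    return B_clean + N
```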

3. Regularization methods

3.1. Tikhonov regularization

Tikhonov regularization is one of the most well-known and popular regularization methods for dealing with the ill-posed nature of the inverse ECG problem (Golub & van Loan, 1980). In this method, a cost function consisting of the residual norm and the constraint norm is defined, and the solution is chosen to minimize this cost function (Aster, Borcher, & Thurber, 2005):
\[ x_\lambda = \arg\min_x \left\{ \|Ax - b\|_2^2 + \lambda^2 \|Rx\|_2^2 \right\}, \tag{3} \]

where \(\lambda\) is the regularization parameter, \(x_\lambda\) is the solution for a specific \(\lambda\) value, and \(R\) is the regularization matrix representing the constraints to be used for regularization. Here, \(\|\cdot\|_2\) stands for the Euclidean norm, and we use \(R = I\) (the identity matrix, i.e. zeroth-order regularization), because it is more suitable for the inverse problem of ECG (Franzone, Guerri, Taccardi, & Viganotti, 1985).

Two alternative representations of Tikhonov regularization are presented below:
\[ \left(A^T A + \lambda^2 R^T R\right) x = A^T b, \tag{4} \]
\[ \min_x \left\| \begin{bmatrix} A \\ \lambda R \end{bmatrix} x - \begin{bmatrix} b \\ 0 \end{bmatrix} \right\|_2. \tag{5} \]

Considering the above two equations, it can be inferred that if the null spaces of \(A\) and \(R\) intersect only trivially (i.e. \(\mathcal{N}(A) \cap \mathcal{N}(R) = \{0\}\)), then the coefficient matrix \(A^T A + \lambda^2 R^T R\) has full rank, there is a unique solution \(x_{\mathrm{est}}\), and it is calculated as:
\[ x_{\mathrm{est}} = A_\lambda\, b, \tag{6} \]

where
\[ A_\lambda = \left(A^T A + \lambda^2 R^T R\right)^{-1} A^T \tag{7} \]

is the Tikhonov regularized inverse of the matrix \(A\). As in many regularization methods, the solution components associated with large singular values contribute more strongly than those associated with small ones; therefore, the determination of the regularization parameter \(\lambda\) is important.
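As an illustration, the zeroth-order Tikhonov solution of Equations (6) and (7) can be computed directly; the sketch below is a minimal NumPy example with hypothetical names, not the exact implementation used in the study.

```python
import numpy as np

def tikhonov_inverse(A, lam):
    """Tikhonov regularized inverse of A (Eq. 7) with R = I (zeroth order)."""
    n = A.shape[1]
    # Solve (A^T A + lam^2 I) X = A^T, which yields (A^T A + lam^2 I)^{-1} A^T
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T)

# Usage (Eq. 6): x_est = tikhonov_inverse(A, lam) @ b
```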

3.2. Lanczos least-squares QR (L-LSQR)

Lanczos least-squares QR factorization (L-LSQR) is an iterative method for solving linear systems. When the coefficient matrix is large, iterative methods are preferable to direct solutions. The L-LSQR method iteratively produces a sequence of approximate solutions, and after k iterations the solution approaches an optimal one. However, this method suffers from a phenomenon called "semi-convergence": if the number of iterations is not limited, the iterates may drift toward a worse solution with higher relative error (Jiang et al., 2007).

The L-LSQR method starts by finding the sequence of Lanczos vectors using the Lanczos-bidiagonalization method. Lanczos-bidiagonalization computes vectors \(u_j \in \mathbb{R}^m\), \(v_j \in \mathbb{R}^n\) and scalars \(\alpha_j\) and \(\beta_j\) such that \(B_k = U_{k+1}^T A V_k\). Here, \(B_k\) is a \((k+1)\times k\) lower bidiagonal matrix (blank entries are zero):
\[ B_k = \begin{bmatrix} \alpha_1 & & & \\ \beta_2 & \alpha_2 & & \\ & \beta_3 & \ddots & \\ & & \ddots & \alpha_k \\ & & & \beta_{k+1} \end{bmatrix}. \tag{8} \]

The Lanczos vectors are orthonormal, such that:
\( U_{k+1} = (u_1, u_2, \ldots, u_{k+1}) \in \mathbb{R}^{m \times (k+1)} \), \( U_{k+1}^T U_{k+1} = I_{k+1} \),

and \( V_k = (v_1, v_2, \ldots, v_k) \in \mathbb{R}^{n \times k} \), \( V_k^T V_k = I_k \).

3.2.1. Lanczos-bidiagonalization

1. Choose a starting vector \(b \in \mathbb{R}^m\) and set \(\beta_1 = \|b\|_2\), \(u_1 = b/\beta_1\), and \(v_0 = 0\).

2. For \(i = 1, 2, \ldots, k\) do

   a. \(r_i = A^T u_i - \beta_i v_{i-1}\)

   b. \(\alpha_i = \|r_i\|_2\)

   c. \(v_i = r_i / \alpha_i\)

   d. \(P_i = A v_i - \alpha_i u_i\)

   e. \(\beta_{i+1} = \|P_i\|_2\)

   f. \(u_{i+1} = P_i / \beta_{i+1}\)

   End
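A minimal NumPy sketch of these steps is given below (hypothetical function name, no reorthogonalization); it returns \(U_{k+1}\), \(V_k\), and \(B_k\) as used in Equation (9).

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    """Lanczos (Golub-Kahan) bidiagonalization following steps 1-2 above.
    Returns U_{k+1} (m x (k+1)), V_k (n x k), and the (k+1) x k lower bidiagonal B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k + 1))          # column 0 plays the role of v_0 = 0
    alpha = np.zeros(k + 1)
    beta = np.zeros(k + 2)
    beta[1] = np.linalg.norm(b)
    U[:, 0] = b / beta[1]             # u_1 = b / beta_1
    for i in range(1, k + 1):
        r = A.T @ U[:, i - 1] - beta[i] * V[:, i - 1]   # r_i = A^T u_i - beta_i v_{i-1}
        alpha[i] = np.linalg.norm(r)
        V[:, i] = r / alpha[i]                          # v_i = r_i / alpha_i
        p = A @ V[:, i] - alpha[i] * U[:, i - 1]        # P_i = A v_i - alpha_i u_i
        beta[i + 1] = np.linalg.norm(p)
        U[:, i] = p / beta[i + 1]                       # u_{i+1} = P_i / beta_{i+1}
    B = np.zeros((k + 1, k))
    for i in range(1, k + 1):
        B[i - 1, i - 1] = alpha[i]                      # diagonal entries alpha_i
        B[i, i - 1] = beta[i + 1]                       # subdiagonal entries beta_{i+1}
    return U, V[:, 1:], B
```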

After k iterations, three matrices will have been computed: a lower bidiagonal matrix \(B_k\) and two matrices \(U_{k+1}\) and \(V_k\). These matrices are related by the following relationships:
\[ b = \beta_1 u_1 = \beta_1 U_{k+1} e_1, \qquad A V_k = U_{k+1} B_k, \qquad A^T U_{k+1} = V_k B_k^T + \alpha_{k+1} v_{k+1} e_{k+1}^T, \tag{9} \]

where \(e_i\) represents the \(i\)th unit vector. The quantities calculated by the Lanczos-bidiagonalization algorithm can now be used to solve the least-squares problem:
\[ \min_x \|Ax - b\|_2. \tag{10} \]

Here, the solution has the form:
\[ x^{(k)} = V_k\, y^{(k)}, \tag{11} \]

where the length of the vector \(y^{(k)}\) is k. Then the residual \(r^{(k)} = b - A x^{(k)}\) is defined, and by substitution we have:
\[ r^{(k)} = \beta_1 u_1 - A V_k y^{(k)} = U_{k+1}\left(\beta_1 e_1 - B_k y^{(k)}\right). \tag{12} \]

Let us define \(t_{k+1} = \beta_1 e_1 - B_k y^{(k)}\). Since \(U_{k+1}\) has orthonormal columns, \(y^{(k)}\) should be chosen so that it minimizes \(\|t_{k+1}\|_2\). Thus, the least-squares problem becomes:
\[ \min_{y^{(k)}} \left\| \beta_1 e_1 - B_k y^{(k)} \right\|_2. \tag{13} \]

Applying a standard QR factorization to Equation (13) gives:
\[ Q_k \begin{bmatrix} B_k & \beta_1 e_1 \end{bmatrix} = \begin{bmatrix} R_k & f_k \\ g_k^T & \tilde{\varphi}_{k+1} \end{bmatrix} = \begin{bmatrix} \rho_1 & \theta_2 & & & \varphi_1 \\ & \rho_2 & \theta_3 & & \varphi_2 \\ & & \ddots & \ddots & \vdots \\ & & & \rho_k & \varphi_k \\ g_{k1} & g_{k2} & \cdots & g_{kk} & \tilde{\varphi}_{k+1} \end{bmatrix}, \tag{14} \]
where \(R_k\) is upper bidiagonal with diagonal entries \(\rho_1, \ldots, \rho_k\) and superdiagonal entries \(\theta_2, \ldots, \theta_k\), and \(f_k = (\varphi_1, \ldots, \varphi_k)^T\).

Then, \(y^{(k)}\) and \(t_{k+1}\) can be calculated from:
\[ f_k = R_k\, y^{(k)}, \qquad t_{k+1} = Q_k^T \begin{bmatrix} 0 \\ \tilde{\varphi}_{k+1} \end{bmatrix}. \tag{15} \]

Finally, by combining Equations (11) and (15) we have:
\[ x^{(k)} = V_k R_k^{-1} f_k = D_k f_k. \tag{16} \]

As stated before, L-LSQR is a semi-convergent method: with further iterations it may diverge from the optimal solution. Therefore, choosing the bound for k appropriately is an important issue.
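For reference, an equivalent computation is available in standard libraries; the sketch below (assuming SciPy is installed) simply caps the number of LSQR iterations at the truncation index k, which then acts as the regularization parameter, as discussed above.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

def l_lsqr_solve(A, b, k):
    """L-LSQR-style solution: run LSQR for at most k iterations, so that the
    truncation index k controls the semi-convergence behaviour."""
    result = lsqr(A, b, iter_lim=k)
    return result[0]          # the approximate solution x^(k)
```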

3.3. Lanczos truncated total least-squares (L-TTLS)

When the Lanczos-bidiagonalization method is combined with TTLS, we call it the Lanczos TTLS (L-TTLS) method. In the L-TTLS algorithm (Morigi & Sgallari, 2001), starting with an initial vector \(u_1 = b/\|b\|_2\), two sets of vectors, \(V_k\) and \(U_k\), and the \((k+1)\times k\) bidiagonal matrix \(B_k\) are produced after k iterations such that:

\( V_k = \{v_1, v_2, \ldots, v_k\} \), and \( U_k = \{u_1, u_2, \ldots, u_{k+1}\} \).

These matrices are related to each other by the following equations:
\[ A V_k = U_k B_k \quad \text{and} \quad b = \beta_1 u_1. \tag{17} \]

After k iterations, if k is large enough to capture all significant singular values of the matrix A, the TLS problem can be projected onto the subspaces spanned by \(V_k\) and \(U_k\).

The final form of the problem will be:
\[ \min_{\hat{B}_k,\, \hat{e}^{(k)}} \left\| \begin{bmatrix} B_k & \beta_1 e_1 \end{bmatrix} - \begin{bmatrix} \hat{B}_k & \hat{e}^{(k)} \end{bmatrix} \right\|_F, \quad \text{subject to} \quad \hat{B}_k\, y = \hat{e}^{(k)}, \tag{18} \]

where \(\|\cdot\|_F\) denotes the Frobenius norm. The TLS method is then applied to this small matrix, which is the result of the Lanczos-bidiagonalization process, in order to create the truncated TLS solution, namely, the TTLS solution. To calculate the TTLS solution, the SVD is applied to the matrix \([B_k \;\; \beta_1 e_1]\):

\[ \begin{bmatrix} B_k & \beta_1 e_1 \end{bmatrix} = \bar{U}_k \bar{\Sigma}_k \bar{V}_k^T. \tag{19} \]

The matrix \(\bar{V}_k\) is partitioned as noted below:
\[ \bar{V}_k = \begin{bmatrix} \bar{V}_{11} & \bar{V}_{12} \\ \bar{V}_{21} & \bar{V}_{22} \end{bmatrix}, \tag{20} \]

where \(\bar{V}_{11} \in \mathbb{R}^{(k-1)\times(k-1)}\), \(\bar{V}_{12} \in \mathbb{R}^{(k-1)\times 1}\), \(\bar{V}_{21} \in \mathbb{R}^{1\times(k-1)}\), and \(\bar{V}_{22} \in \mathbb{R}^{1\times 1}\).

Then the standard TLS solution can be defined as:
\[ \bar{y}^{(k)} = -\bar{V}_{12}^{(k)} \left(\bar{v}_{22}^{(k)}\right)^{-1}. \tag{21} \]

Finally, the solution is calculated as:
\[ \tilde{x} = V_k\, \bar{y}^{(k)} = -V_k \bar{V}_{12}^{(k)} \left(\bar{v}_{22}^{(k)}\right)^{-1}. \tag{22} \]
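Assuming the bidiagonalization sketch from Section 3.2.1 (the hypothetical lanczos_bidiag function) is available, the projected TLS step of Equations (18)-(22) can be sketched as below; in this simplified version the truncation is controlled only by the subspace dimension k.

```python
import numpy as np

def l_ttls(A, b, k):
    """Hypothetical L-TTLS sketch: project onto the Lanczos subspace (Eq. 17),
    then solve the small TLS problem there (Eqs. 18-22)."""
    _, V, B = lanczos_bidiag(A, b, k)         # sketch from Section 3.2.1
    beta1 = np.linalg.norm(b)
    e1 = np.zeros(k + 1)
    e1[0] = beta1
    C = np.column_stack([B, e1])              # (k+1) x (k+1) matrix [B_k, beta1*e1], Eq. (19)
    _, _, Vt = np.linalg.svd(C)               # singular values in descending order
    Vbar = Vt.T
    v12 = Vbar[:-1, -1]                       # last column, top block of the partition (Eq. 20)
    v22 = Vbar[-1, -1]                        # last column, bottom scalar
    y = -v12 / v22                            # TLS solution in the subspace (Eq. 21)
    return V @ y                              # back-projection to the full space (Eq. 22)
```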

3.4. Regularization parameter selection method

The regularization parameters in this study are the λ parameter in the Tikhonov regularization method and the truncation number k in the L-LSQR and L-TTLS methods. The maximum correlation coefficient (MCC) method is used for regularization parameter selection, in which the corresponding parameter is selected so that the CC between the solution and the true potentials is maximized; this is possible here because the true epicardial potentials are available from the experimental measurements.

4. Lead reduction for body surface potential mapping (BSPM)

Inverse ECG has the potential to be a strong tool for the clinical diagnosis of various heart diseases. However, for practical purposes, the number of electrodes attached to the body surface should be as small as possible, while their configuration should still acquire as much information as possible. Toward this end, numerous studies have been conducted on lead reduction, each aiming to reduce the number of electrodes attached to the body surface in such a way that the informative potential distribution on the torso surface remains accurately detectable (Barr, Spach, & Herman-Giddens, 1971; Donnelly, Finlay, Nugent, & Black, 2008; Finlay et al., 2006; Lux, Smith, Wyatt, & Abildskov, 1978; Lux et al., 1979). The common aim in all these studies is to use a smaller number of electrodes than the number of nodes in the associated geometry model.

Besides the number of leads, their locations on the body surface are of great importance. For instance, the electrical activity of the heart is more detectable in the frontal region of the torso (Lux et al., 1978). Several lead-sets are used in this study, and the effects of reducing the number of leads are evaluated.

We start with a complete lead-set consisting of 771 electrodes (Figure 1(a)), and then reduce these leads to the 192-lead configuration presented in Lux et al. (1978) (Figure 1(b)). These 192 leads form an equally spaced 12 × 16 electrode array that covers the upper part of the body.

Figure 1. (a) 771 complete lead-set and (b) 192 lead-set.


To relate the reduced lead-set problems to the original problem in Equation (1), a row-removal process is needed. This can be done by pre-multiplying the forward matrix A with a selection matrix S, which contains a "1" in each row at the column corresponding to the row of A to be selected, and zeros elsewhere:
\[ A_s = S A. \tag{23} \]

Therefore, the rows corresponding to the selected 192-, 64-, and 32-lead configurations are extracted from the original forward matrix, \(A \in \mathbb{R}^{771 \times 490}\), by removing the undesired rows. The same process is applied to the data matrix B to obtain a reduced data matrix corresponding to the new reduced forward matrix (Ghodrati et al., 2007):
\[ B_s = S B. \tag{24} \]

In this study, two approaches are considered to select 64 and 32 electrodes out of a larger number of electrodes. In the first approach, the desired number and the locations of the reduced leads are manually selected from a larger lead-set. We apply two main selection criteria in this approach: the first is to select electrodes only from the frontal region of the torso. The second, and more fundamental, criterion is to select electrodes more densely between the potential extrema on the torso (the blue and red regions in Figure (a)), since this region contains significant information about the electrical activity of the heart and better represents the waveform pattern than other regions. The resulting lead-sets are shown in Figure 2, and are referred to as 64-lead-set-I and 32-lead-set-I, respectively.

Figure 2. (a) 64-lead-set-I and (b) 32-lead-set-I.

Note: In this approach, reduced leads are manually selected from the frontal surface of the body.

The second approach for selecting reduced lead-sets employs an inverse problem solution approach, which results in 64-lead-set-II and 32-lead-set-II. In this approach, instead of the complete 771 lead-set, the 192-lead configuration proposed by Lux et al. (1978) is taken as the primary lead-set, from which we select 64 and 32 leads automatically. Thus, we start with a forward transfer matrix A of size 192 × 490 and a data matrix B of size 192 × 109, which contains the torso surface measurements. The process of selecting the optimal 64 or 32 leads out of these 192 leads is based on selecting one optimal lead per iteration. In each iteration, the lead whose measurement yields the best inverse solution for the whole system is selected. To solve the related inverse problem, Tikhonov regularization is used, with maximum CC as the regularization parameter selection method. In other words, the lead selection process is driven by how well each individual electrode helps reconstruct the epicardial potentials.

A flowchart of the reduced lead-set selection steps is presented in Figure 3. At the first iteration, the inverse problem is solved using each body surface lead separately. The lead that yields the maximum CC in the inverse solution is selected as the first lead of the reduced lead-set. At the second iteration, each remaining body surface lead is appended in turn to the previously selected lead, and the two-lead combination that yields the maximum CC in the solution forms the first two leads of the reduced lead-set. This procedure of appending one lead to the previously selected leads and solving the inverse problem for the maximum CC value is repeated until the desired number of leads is reached. The resulting lead-sets are referred to as 64-lead-set-II and 32-lead-set-II, and their configurations are shown in Figure 4. Note that this is an automatic lead selection algorithm, which may yield both frontal and back lead distributions.
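The sequential selection loop can be summarized in a short sketch. The function and parameter names below are hypothetical, and the per-lead score is a simplified overall CC (computed over all nodes and time instants at once rather than per time instant); Tikhonov regularization with the MCC rule is used as described above.

```python
import numpy as np

def tikhonov_solve(As, Bs, lam):
    """Zeroth-order Tikhonov solution (Eqs. 6-7) for the reduced system As x = Bs."""
    n = As.shape[1]
    return np.linalg.solve(As.T @ As + lam**2 * np.eye(n), As.T @ Bs)

def greedy_lead_selection(A, B, X_true, lambdas, n_leads):
    """Hypothetical sketch of the sequential lead-selection loop described above.
    A: 192 x 490 forward matrix, B: 192 x T torso data, X_true: 490 x T epicardial data.
    Each candidate lead is scored by the best CC (over lambdas) of its Tikhonov solution."""
    selected, candidates = [], list(range(A.shape[0]))
    for _ in range(n_leads):
        best_lead, best_cc = None, -np.inf
        for lead in candidates:
            rows = selected + [lead]
            As, Bs = A[rows, :], B[rows, :]        # reduced system (Eqs. 23-24)
            cc = max(
                np.corrcoef(tikhonov_solve(As, Bs, lam).ravel(), X_true.ravel())[0, 1]
                for lam in lambdas
            )
            if cc > best_cc:
                best_lead, best_cc = lead, cc
        selected.append(best_lead)                 # keep the lead with the maximum CC
        candidates.remove(best_lead)
    return selected
```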

Figure 3. Main steps of lead-selection approach.

Figure 3. Main steps of lead-selection approach.

Figure 4. (a) 64-lead-set-II front view, (b) 64-lead-set-II back view, (c) 32-lead-set-II front view, and (d) 32-lead-set-II back view.

Figure 4. (a) 64-lead-set-II front view, (b) 64-lead-set-II back view, (c) 32-lead-set-II front view, and (d) 32-lead-set-II back view.

5. Results

The epicardial potentials used in this study were measured at the University of Utah Nora Eccles Harrison Cardiovascular Research and Training Institute (CVRTI) (MacLeod, Lux, & Taccardi, 1998). These epicardial measurements were taken from 490 points on the heart (epicardial) surface at a sampling rate of 1,000 samples per second, and the forward transfer matrix was used to simulate 771 body surface potentials from the epicardial potential measurements. We simulated the body surface potentials at a signal-to-noise ratio (SNR) of 30 dB. For this study, data from a single animal are included.

The goal here can be summarized in two parts:

(1)

We know from the literature that reduced lead-sets can yield inverse solutions of quality comparable to those obtained from larger lead-sets, provided that an appropriate number of leads and a suitable distribution are selected. Here, we aim to study and compare the performances of two reduced lead-set selection algorithms, with each other and with solutions based on the complete lead-set.

(2)

There are various regularization algorithms in the literature for solving the inverse ECG problem. Here, we aim to compare two Lanczos-bidiagonalization-based methods in terms of their performances when reduced lead-sets are used.

The correlation coefficient (CC) and the relative difference measurement star (RDMS) are used as two quantitative criteria to compare the results of the three regularization algorithms with the true (experimentally measured) epicardial potentials. Smaller RDMS values indicate a higher quality solution, i.e. the regularized solution is closer to the true solution.
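For reference, these metrics are commonly defined as follows in the inverse ECG literature (the exact normalization used in this study is assumed to follow these standard definitions):

\[
\mathrm{CC} = \frac{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)\left(\hat{x}_i-\bar{\hat{x}}\right)}
{\sqrt{\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2}\,\sqrt{\sum_{i=1}^{n}\left(\hat{x}_i-\bar{\hat{x}}\right)^2}},
\qquad
\mathrm{RDMS} = \left\|\frac{x}{\|x\|_2}-\frac{\hat{x}}{\|\hat{x}\|_2}\right\|_2,
\]

where \(x\) and \(\hat{x}\) denote the true and reconstructed epicardial potential distributions at a given time instant, and the bars denote means over the \(n\) epicardial nodes.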

Here, the inverse problem is solved at all time instants using the three regularization methods for each lead-set, including the case where all 771 leads are used. Then, CC and RDMS values are calculated at each time instant by comparing the true epicardial potentials with the reconstructed ones over the heart surface. Table 1 lists the averages and standard deviations over time of the CC and RDMS values for the different regularization methods and lead-set selections. It can be inferred that the L-TTLS method reconstructs epicardial potentials better than the L-LSQR method. As the number of leads decreases, considering the average and standard deviation values of CC, the difference between the results of Tikhonov regularization and the L-TTLS method becomes smaller. This indicates that, as the number of measurement sites on the torso is reduced, the L-TTLS method behaves more robustly than the L-LSQR method. Clearly, 64-lead-set-II and 32-lead-set-II perform better than 64-lead-set-I and 32-lead-set-I, respectively. This is because the lead-selection process used for 64-lead-set-II and 32-lead-set-II is considerably more accurate than the manual selection scheme used for the 64-lead-set-I and 32-lead-set-I cases. Similarly, considering the RDMS averages and standard deviations in the last column of Table 1, it can be concluded that although the RDMS values increase as the number of measurement sites is reduced, there is no drastic change in these values. Therefore, a large number of electrodes is unnecessary for reconstructing epicardial potentials, and similar results can be obtained using a small number of leads in an optimal configuration.

Table 1. Average and standard deviation values of CC and RDMS of different regularization methods for different lead-sets

In order to assess the quality of the inverse solutions using reduced lead-sets, we also plot the epicardial potential distributions (iso-potential maps) over the heart surface at a single time instant using the MAP3D visualization software (CIBC, 2016), shown in Figure 5. In this figure, the top row shows the true potential distribution at a single time instant, and the remaining plots show the solutions obtained by the different methods and lead-sets. In these maps, the blue regions are the depolarized parts of the heart and the red regions correspond to tissue at rest. The figure shows that the contours of the true epicardial potentials and those of 64-lead-set-II and 32-lead-set-II are very similar, indicating that the quality of the solutions from these reduced lead-sets is high. Considering the wavefront shape, it can again be concluded that the regularization results of 64-lead-set-II and 32-lead-set-II, especially for the L-TTLS method, are of higher quality than those of 64-lead-set-I and 32-lead-set-I. This confirms that the optimally selected 64-lead-set-II and 32-lead-set-II reconstruct epicardial potentials better than 64-lead-set-I and 32-lead-set-I.

Figure 5. MAP3D results at t = 53 ms for the Tikhonov regularization, L-LSQR, and L-TTLS methods. Panel (a) shows the true epicardial potential distribution. The remaining panels (in groups of three images) correspond to solutions with (b) the 771 complete lead-set, (c) 192 lead-set, (d) 64 lead-set-I, (e) 64 lead-set-II, (f) 32 lead-set-I, and (g) 32 lead-set-II.


6. Conclusion

In this study, three regularization methods, namely Tikhonov regularization, L-LSQR, and L-TTLS, are employed to reconstruct potential distributions on the epicardial surface by solving the inverse problem of ECG. These regularization methods are applied to data corresponding to complete and reduced lead-sets. To compare the results of these regularization methods, average and standard deviation values of the correlation coefficient (CC) and the relative difference measurement star (RDMS) are calculated. Our results show that the L-TTLS method performs better than L-LSQR. Taking the results of Tikhonov regularization as a baseline, it can be concluded that as the number of measurement leads decreases, the difference between the potentials reconstructed using Tikhonov regularization and L-TTLS decreases, so the L-TTLS method can be considered more robust and accurate than Tikhonov regularization when the number of measurement sites is reduced. In fact, it is beneficial to use methods that employ Lanczos bidiagonalization as part of the regularization process to speed up the execution time. A short runtime is important when dealing with large data matrices in ECG inverse problems (for example, inverse problems formulated in terms of transmembrane potentials).

In many studies, a large number of leads is used to obtain the electrical activity of the heart. However, according to the results obtained in this study, it is not necessary to use a large number of leads to acquire signals on the body surface; a small number of leads optimally located on the torso surface is sufficient to estimate the potential distribution on the heart surface. Therefore, by employing a small number of measurement leads, the electrical sources on the heart surface, and correspondingly heart pathologies, can be diagnosed and treated, since the inverse solution of ECG provides spatial and temporal information about the electrical activity of the heart.

As future work, we will apply the lead-selection method proposed in this study to a wider range of data with varying pacing sites on the epicardial surface. In this way, it will be possible to understand how the reduced lead-sets are affected by different propagation patterns of the potentials on the heart, and to develop a generalized lead-set that produces good solutions for these different data.

Funding

This project (i.e. MAP3D) was supported by the National Institute of General Medical Sciences of the National Institutes of Health [grant number P41 GM103545-18].

Acknowledgments

The authors would like to thank Dr Robert S. MacLeod from the University of Utah (CVRTI and SCI Institute) for the data used in this study. This work was made possible in part by the MAP3D software.

Additional information

Notes on contributors

Fourough Gharbalchi

Fourough Gharbalchi is currently a PhD student in biomedical engineering at Middle East Technical University, Ankara, Turkey. Her research interests are statistical signal processing with a particular focus on biomedical applications, forward and inverse problems of electrocardiography, modeling the electrical activity of the heart, video and image processing using deep-learning approaches, and open-source software systems for these applications.

Yesim Serinagaoglu Dogrusoz

Yesim Serinagaoglu Dogrusoz is an associate professor of Electrical and Electronics Engineering at Middle East Technical University, and an affiliated member of the Institute of Applied Mathematics and the Biomedical Engineering Graduate Program. Her research interests include forward/inverse problems in electrocardiography, cardiac electrical modeling, and electrocardiography signal processing.

Gerhard Wilhelm Weber

Gerhard Wilhelm Weber is a professor at IAM, METU, Ankara, with research interests in operations research, finance, optimization, data mining, and the life sciences. He received his Diploma and Doctorate at RWTH Aachen and his Habilitation at TU Darmstadt, held short professorships in Cologne and Chemnitz, and is a EURO conference advisor and chair of the IFORS online resources on OR for developing countries.

References

  • Aster, R. C., Borcher, B., & Thurber, C. (2005). Parameter estimation and inverse problems. Amsterdam: Academic Press.
  • Aydin, U., & Dogrusoz, Y. S. (2011). A Kalman filter-based approach to reduce the effects of geometric errors and the measurement noise in the inverse ECG problem. Medical & Biological Engineering & Computing, 49, 1003–1013. doi:10.1007/s11517-011-0757-8
  • Barr, R. C., Spach, M. S., & Herman-Giddens, G. (1971). Selection of the number and positions of measuring locations for electrocardiography. IEEE Transactions on Biomedical Engineering, BME-18, 125–138. doi:10.1109/TBME.1971.4502813
  • CIBC. (2016). MAP3D: Interactive scientific visualization tool for bioengineering data. Scientific Computing and Imaging Institute (SCI). Retrieved from http://www.sci.utah.edu/cibc/software.html
  • Donnelly, M. P., Finlay, D. D., Nugent, C. D., & Black, N. D. (2008). Lead selection: Old and new methods for locating the most electrocardiogram information. Journal of Electrocardiology, 41, 257–263. doi:10.1016/j.jelectrocard.2008.02.004
  • Finlay, D. D., Nugent, C. D., Donnelly, M. P., Lux, R. L., McCullagh, P. J., & Black, N. D. (2006). Selection of optimal recording sites for limited lead body surface potential mapping: A sequential selection based approach. BMC Medical Informatics and Decision Making, 6, 9–17. doi:10.1186/1472-6947-6-9
  • Franzone, P. C., Guerri, L., Taccardi, B., & Viganotti, C. (1985). Finite element approximation of regularized solutions of the inverse potential problem of electrocardiography and applications to experimental data. Calcolo, 22, 91–186. doi:10.1007/BF02576202
  • Ghodrati, A., Brooks, D. H., & MacLeod, R. S. (2007). Methods of solving reduced lead systems for inverse electrocardiography. IEEE Transactions on Biomedical Engineering, 54, 339–343. doi:10.1109/TBME.2006.886865
  • Ghodrati, A., Brooks, D. H., Tadmor, G., & MacLeod, R. S. (2006). Wavefront-based models for inverse electrocardiography. IEEE Transactions on Biomedical Engineering, 53, 1821–1831. doi:10.1109/TBME.2006.878117
  • Ghosh, S., & Rudy, Y. (2009). Application of L1-norm regularization to epicardial potential solution of the inverse electrocardiography problem. Annals of Biomedical Engineering, 37, 902–912. doi:10.1007/s10439-009-9665-6
  • Golub, G. H., & van Loan, C. (1980). An analysis of the total least-squares problem. SIAM Journal on Numerical Analysis, 17, 883–893. doi:10.1137/0717073
  • Güçlü, A. (2013). Comparison of five regularization methods for the solution of inverse electrocardiography problem (MSc thesis). Middle East Technical University, Ankara.
  • Gulrajani, R. M. (1998). The forward and inverse problems of electrocardiography. IEEE Engineering in Medicine and Biology Magazine, 17, 84–101, 122. doi:10.1109/51.715491
  • Jiang, M., Xia, L., Shou, G., & Tang, M. (2007). Combination of the LSQR method and a genetic algorithm for solving the electrocardiography inverse problem. Physics in Medicine and Biology, 52, 1277–1294. doi:10.1088/0031-9155/52/5/005
  • Lux, R. L., Burgess, M. J., Wyatt, R. F., Evans, A. K., Vincent, G. M., & Abildskov, J. A. (1979). Clinically practical lead systems for improved electrocardiography: Comparison with precordial grids and conventional lead systems. Circulation, 59, 356–363. doi:10.1161/01.CIR.59.2.356
  • Lux, R. L., Smith, C. R., Wyatt, R. F., & Abildskov, J. A. (1978). Limited lead selection for estimation of body surface potential maps in electrocardiography. IEEE Transactions on Biomedical Engineering, BME-25, 270–276. doi:10.1109/TBME.1978.326332
  • MacLeod, R. S., Lux, R. L., & Taccardi, B. (1998). A possible mechanism for electrocardiographically silent changes in cardiac repolarization. Journal of Electrocardiology, 30, 114–121. doi:10.1016/S0022-0736(98)80053-8
  • Morigi, S., & Sgallari, F. (2001). A regularizing L-curve Lanczos method for underdetermined linear systems. Applied Mathematics and Computation, 121, 55–73. doi:10.1016/S0096-3003(99)00262-3
  • Ramanathan, C., Ghanem, R. N., Jia, P., Ryu, K., & Rudy, Y. (2004, April). Noninvasive electrocardiographic imaging for cardiac electrophysiology and arrhythmia. Nature Medicine, 10, 422–428. doi:10.1038/nm1011
  • Serinagaoglu, Y., Brooks, D. H., & MacLeod, R. S. (2006, October). Improved performance of Bayesian solutions for inverse electrocardiography using multiple information sources. IEEE Transactions on Biomedical Engineering, 53, 2024–2034. doi:10.1109/TBME.2006.881776
  • Shou, G., Xia, L., & Jiang, M. (2007). Solving the electrocardiography inverse problem by using an optimal algorithm based on the total least-squares theory. Third International Conference on Natural Computation, IEEE, 5, 115–119. doi:10.1109/ICNC.2007.674
  • Shou, G., Xia, L., Jiang, M., Wei, Q., Liu, F., & Crozier, S. (2008). Truncated total least squares: A new regularization method for the solution of ECG inverse problems. IEEE Transactions on Biomedical Engineering, 55, 1327–1335. doi:10.1109/TBME.2007.912404
  • Shou, G., Xia, L., Liu, F., Jiang, M., & Crozier, S. (2011). On epicardial potential reconstruction using regularization schemes with the L1-norm data term. Physics in Medicine and Biology, 56, 57–72. doi:10.1088/0031-9155/56/1/004
  • Tikhonov, A. N., & Arsenin, V. Y. (1977). Solution of ill-posed problems. Washington, DC: Winston and Sons.
  • Twomey, S. (1963). On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature. Journal of the ACM, 10, 97–101. doi:10.1145/321150.321157
  • van Oosterom, A. (1999). The use of the spatial covariance in computing pericardial potentials. IEEE Transactions on Biomedical Engineering, 46, 778–787. doi:10.1109/10.771187
  • Wang, L., Qin, J., Wong, T. T., & Heng, P. A. (2011). Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection. Physics in Medicine and Biology, 56, 6291–6310. doi:10.1088/0031-9155/56/19/009
  • World Health Organization. (2014). Retrieved December 12, 2014, from http://www.who.int/cardiovascular_diseases/resources/atlas/en