Chapter 11

Cloud-based cardiac health monitoring using event-driven ECG processing and ensemble classification techniques


Copyright © IOP Publishing Ltd 2020
Pages 11-1 to 11-24


ISBN: 978-0-7503-3279-8

Abstract

In this chapter, a method for the monitoring of cardiovascular health is presented. It processes and classifies the electrocardiogram (ECG) signals to realize an automated cloud-based diagnosis of chronic heart disease. To achieve real-time compression and efficient signal processing and transmission, event-driven ECG signal acquisition is utilized. The experimental results reveal that a substantial gain in compression and usage of bandwidth is achieved by the designed solution, in contrast to the classical counterpart, due to the event-driven nature. It results in a significant reduction of the computational complexities of the post-denoising, feature extraction and classification.

This article is available under the terms of the IOP-Standard Books License

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publisher, or as expressly permitted by law or under terms agreed with the appropriate rights organization. Multiple copying is permitted in accordance with the terms of licences issued by the Copyright Licensing Agency, the Copyright Clearance Centre and other reproduction rights organizations.

Permission to make use of IOP Publishing content other than as set out above may be sought at permissions@ioppublishing.org.

Varun Bajaj and G R Sinha have asserted their right to be identified as the authors of this work in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

Electrocardiogram (ECG) signals have been widely utilized to detect heart arrhythmias, and cloud-based health monitoring is beneficial in tracking, avoiding and treating these abnormalities. As summarized in the abstract, the presented method processes and classifies ECG signals to realize an automated cloud-based diagnosis of chronic heart disease, employing event-driven ECG signal acquisition to achieve real-time compression and efficient signal processing and transmission. The denoised signal features are extracted by employing the autoregressive (AR) Burg method. Ensemble classifiers are then used to classify the different types of ECG signals. The precision of the system is studied in terms of classification performance.

11.1. Introduction

Recent developments in the smart device industry have gradually increased the deployment of biomedical implants to provide users with a virtual healthcare framework. Wearable devices monitor biomedical signals by managing the data produced by smart sensors. Over the last decade, clinically related wearable devices have included instruments such as stethoscopes, blood pressure monitors, Holter ECG readers and portable EEG systems. While this trend still persists today, wearable design firms have also developed revolutionary tools and techniques to capture and analyse body signals. Thus biomedical signals obtained from wearable sensors enable patients with chronic diseases to be remotely monitored continuously and in real time. In addition, several of these biomedical signals have clinical significance in monitoring the stage of the condition or disease [1].

Advanced and sophisticated implants are also utilized for processing biomedical signals. This is particularly popular in the case of contemporary hospitals and homes for the elderly. Such implants augment the precision and accelerate the procedure of diagnosis and decision making. New biomedical implants offer the future potential of using wireless sensing methods for remote cardiovascular system healthcare monitoring. This is particularly beneficial for the elderly and seriously ill cardiac patients [2]. The effective realization and embedding of wireless sensing technology has attracted various fields, for example biomedical implant based fabric design and computer based health diagnosis [3].

Decreased fertility and increased lifespan have led to a high elderly population ratio in developed countries, which has subsequently increased medical needs. Mobile health monitoring, diagnosis and treatment offer the possibility of making medical care cheaper and thus easily affordable. It will be possible to obtain quality medical treatment without the need to go to hospitals or healthcare centers. There is also a significant population who cannot take appropriate care of their health because of transportation issues and limitations of mobility: there might be a lack of transportation and/or healthcare infrastructure. This is particularly valid for developing countries with limited health budgets and is also true for rural and remote communities. Their level of healthcare can be improved tremendously by connecting them with medical specialists virtually. This can be realized in the modern technological era by smartly combining information and wireless communication technologies. Such a development will be highly beneficial for rural and remote communities in general and for the developing world in particular. In this context, promising studies are ongoing to develop a cloud-based healthcare infrastructure that is capable of gathering and managing the data of potential patients [4].

Recently, there have been many advances in surgical devices and clinical technology to meet the needs of existing diagnosis and treatment in the healthcare field. These modern healthcare tools and devices provide precise and accelerated health analysis. Additionally, thanks to wireless sensing technology and contemporary networking strategies, it is becoming technically convenient to realize the real-time continuous observation of elderly people and patients with critical health conditions.

The continuous and live surveillance of patients and elderly people, particularly those with chronic illnesses, is essential [4]. Cloud-based mobile healthcare monitoring devices play a key role in tracking and analysing the patient condition. Real-time patient monitoring systems gather people's biomedical signals on an ongoing basis to examine the risks associated with vital health conditions in real time [5]. Thanks to the integration of cloud servers and sensing devices, sensor-based mobile patient monitoring has become increasingly common in personalized healthcare frameworks in recent years [6].

Figure 11.1 presents a cloud-based patient tracking framework for medical diagnosis and care. In this framework, mobile devices constantly gather biomedical signals from different intelligent wireless wearables, condition the signals and transmit them to a cloud-based application. The outcomes are transmitted to the relevant health centers. In the case of an emergency, notifications are sent to alert the emergency services, cardiologists and the relatives of the patient [6].

Figure 11.1. A framework for cloud-based cardiovascular disease monitoring.

In order to analyse biomedical signals for signal recognition, numerous signal processing and machine learning algorithms can be employed. Wang et al [7] utilized a field programmable gate array (FPGA) based approach for cloud-based ECG signal analysis. In [8] an ECG monitoring device is utilized to conduct machine intelligence-based ECG signal analysis and send the findings to the cloud. Cloud-based methods for the detection of coronary heart failure have become common for tracking patients with heart disease [8].

Remote healthcare management services based on cloud servers are supported by smart wireless wearables, personal digital assistants and cloud computing [7, 8]. They involve the use of services offered by cellular network providers, the Global Positioning System (GPS) and the general packet radio service (GPRS). Sensors can acquire biomedical signals on the basis of these capabilities but cannot store, interpret or transmit these signals without using a mobile network such as 5G [9].

Thanks to the remarkable advances in intelligent biomedical wireless implants, remote and mobile processing modules, and wireless communication networks, the latest developments in cloud-based mobile patient tracking services have become possible. There is a large number of accurate, wireless and smart sensors for collecting different health-related biomedical data. These sensors can easily be incorporated into a smartphone or smartwatch, and such smart devices can contribute crucially and significantly to the realization of contemporary and efficient cloud-based mobile healthcare solutions [9]. An effective wireless implant records the vital health condition signals directly from the relevant patient, then preconditions, processes and transmits the conditioned vital health information to the smartphone. The smartphone sends the collected data to the cloud via a wireless cellular network so that the patient's status can be verified continuously. In the case of an emergency, the cloud service application sends the details to the emergency services, clinician and the patient's family [10].

This chapter introduces an original event-based architecture for the smart acquisition and processing of ECG signals via front-end wireless wearables connected to a cloud-based application in an arrhythmia detection system. The goal is to increase device performance and wearable power efficiency by achieving real-time compression, computational effectiveness and reduced transmission activity, while ensuring adequate classification accuracy. The method is promising and leads to an effective ECG wearable implementation along with precise decision support for arrhythmia identification [11–13].

11.2. Background and literature review

Heart disease is one of the main causes of death worldwide [14]. Ventricular arrhythmias cause irregular heartbeats and are the cause of nearly 80% of sudden heart failures. The electrocardiogram (ECG) is a complicated signal and provides essential information about cardiovascular system functionality [15]. Interference and physiological artifacts alter the ECG signal by decreasing the effectiveness of post-processing, such as feature extraction and classification. To overcome these deficiencies, numerous signal processing techniques have been proposed [15]. Several techniques have been implemented for the preprocessing of ECG signals, such as quadratic filtering [16], Kalman filtering [17] and principal component analysis [18]. The ECG signals are processed and analysed using effective methods of signal processing to accurately classify cardiac diseases [15, 19]. The short time Fourier transform (STFT), discrete wavelet transform (DWT), wavelet packet decomposition (WPD) and autoregressive (AR) Burg method are the ECG feature extraction tools commonly used [15, 20]. Features obtained from the extraction process are used for the automatic diagnosis of cardiac arrhythmia. Numerous data mining techniques are utilized for the classification of ECG signals [20]. Some key ECG classification approaches are neural networks [20], support vector machines (SVMs) [15], exhaustive k-means clustering [21], k-nearest neighbors (k-NN) [20] and fuzzy logic [15].

The successful treatment of heart failure can be achieved with a prompt diagnosis of arrhythmia disorders. Therefore, patients with chronic heart disorders need constant monitoring. A wearable ECG system is one of the best instruments for the continuous monitoring of heart disorders [22]. Smart wearables acquire the ECG signal and forward their findings to a testing center [22]. In addition, the embedded application can be used in the case of emergency to call the relevant emergency center or clinician with specifics of the patient's GPS tracking location [22].

Precise and accurate notification to healthcare experts of adverse conditions is important for emergency management. In this sense, a cloud-based mobile healthcare system could broaden and improve healthcare outreach, decision making, chronic illness treatment and the tackling of emergencies. Such a strategy could allow healthcare professionals to make timely decisions and provide the elderly with timely treatment and care.

The smart ECG wearable architecture remains challenging due to the strict size, weight and power consumption constraints. In these circumstances, self-powered smart sensors are usually preferred, as they allow a continuous acquisition of ECG signals without causing the patient too many restrictions. Important power savings in a wireless ECG system can only be accomplished by reducing the use of wireless transceivers, i.e. the transmission of processed data is favored over raw data [22]. To this end, several studies have been performed on the compression of the ECG signal [2325], non-uniform resampling [26] and event-driven data acquisition [12, 22].

11.3. ECG in healthcare

An electrocardiogram (ECG) is recorded non-invasively from the body surface of the patient. It presents the activities of the cardiovascular system of the patient in question in the form of an electrical signal. It is recorded with the help of appropriate electrodes which are placed in particular positions on the patient's body. A particular ECG recording zone is the patient's chest area [13]. Einthoven devised a variety of approaches and ideas in this regard. He invented certain useful concepts for the conditioning, processing and automatic categorization of different types of heart pulses. He also investigated the impact of electrode placement, at various body locations, on the performance of an automatic heartbeat identifier. During this process, various locations on the arms, legs and chest were considered. He also presented a theoretical model of the measurement and identification mechanism based on a single time-varying dipole [27]. Additionally, a differential lead based ECG signal recording was also outlined in [27]. In this case, the measurement was carried out between two predefined positions on the patient's body surface.

The ECG signal is one of the most important elements in clinical practice. Through the introduction of computers, integrated circuits and mobile networks, the technological advances in electrocardiography have accelerated greatly. In addition, ECG enhancement turns out to be feasible in the presence of wireless connectivity and cloud computing. A variety of advanced materials have been investigated for ECG sensors. There are currently numerous nano-devices available which can acquire, analyse and convey ECG signals to the cloud. These are tiny, wireless and very light-weight devices and allow the effective monitoring of the cardiovascular functionality of the people concerned. Because of their tiny dimensions and light weight, the patient does not feel too much inconvenience while using these implants. Most of these implants allow non-invasive measurement of the ECG. Thanks to low-power architectures and electronics technology, the autonomous working life of these devices is extended. They allow the uninterrupted and live monitoring of a patient's cardiovascular system functionality for a much longer period compared to traditional Holter recorders. These implants are not limited only to the monitoring of the cardiovascular system, but are also employed for other potential biomedical applications, such as the monitoring of patients with epilepsy, Alzheimer's, diabetes, high blood pressure and body temperature. Additionally, they can also provide the geographical locality of the person concerned. This information is particularly useful for emergency cases [9].

A known clinical standard is to utilize the vital biomedical signs of the patient concerned for monitoring and diagnosis of health disorders. These signals are recorded using the smart nano-implants. The acquisition process introduces noise and artifacts into the signal. This can affect and corrupt the information which is contained by the signal and can result in an erroneous diagnosis. Therefore, it is important to prepare and precondition these signals before passing them to the analysing physicians. This task is performed by trained and qualified medical technicians. They denoise and condition the raw signal in order to prepare it for the medical consultants. This is realized using noise and artifact removal methods such as filtering and principal component analysis.

Clinical biomedical indications are still considered a common norm for the treatment of illnesses and diseases. Artifact-free and noise-free signals are prepared by qualified technicians before they are sent to the consulting physician for the diagnosis procedure. It is important to use wearable biomedical implants for the uninterrupted live surveillance, monitoring, analysis and diagnosis of cardiovascular system functionality. Multi-channel ECG signals are acquired via smart biomedical implants to automatically detect cardiac functionality disorders. An uninterrupted and long-duration ECG analysis is necessary for precise diagnosis and health condition identification of the cardiovascular system of the patient concerned. Overall it is a very complex and sophisticated mechanism and involves a variety of multidisciplinary advanced techniques and technologies. The system's findings should be effective and precise, otherwise harm to the patient could result and incorrect judgments could be made during diagnosis. This can be avoided by using cloud infrastructure and software to create a tracking system connecting patients and physicians. In addition, some of the currently available wearable devices that acquire biomedical signals from patients are rarely used by physicians. This is due to the quality of the acquired data and its difficulty of interpretation. Different academic organizations and research and development teams have recognized these shortfalls and have begun designing applications for analysing critical signals. Some popular wearables already commercialized for analysing ECG signals are Muse [28], Epoc++ [29] and General Electric's Holter ECG [30]. There are several exciting studies in the literature of smart health surveillance systems that have led to the development of many platforms, approaches and frameworks. Smart monitors and mobile applications provide solutions for delivering healthcare services to remote patients in digital health monitoring systems.

Salvador et al [31] developed a system for the provision of portable acquisition tools and cellular phones to cardiac patients that enable data transmission. However, the system only took into account patients in a stable state and deliberately omitted emergency conditions. Herscovici et al [32] investigated existing technologies for the construction of smart healthcare networks that identify emergencies in ambulances and rural health centers for mobile electrocardiography, and are completely combined with mobile telechography. Shih et al [33] established an integrated smart ECG system for the detection and monitoring of older patients using mobile devices. Ren et al [34] designed a smart healthcare network for patient tracking, with emphasis on security and sustainable wireless communication links. Xia et al [35] designed a cloud-based healthcare framework to track and evaluate ECG quality assessment, ECG enhancement and ECG parameter extraction in real time.

11.4. The proposed approach

The classical ECG diagnostic systems are static in nature, leading to device designs constrained by the worst possible case [36, 37]. They capture the signal at a fixed frequency and then analyse it using post-processing modules with fixed parameters. Thus they store and process a large number of surplus samples. This increases the overall cost of computation, the volume of data transmission and the power consumption [37, 38]. To alleviate these shortcomings, this chapter proposes an event-driven acquisition, processing and classification system for the ECG (see figure 11.2).

Figure 11.2. A block diagram of the proposed system.

11.4.1. Dataset

The functionality of the proposed framework is verified using the MIT-BIH Arrhythmia database [39]. A collection of 24 h ambulatory ECG recordings is utilized in this analysis. The dataset contains arrhythmias which are clinically significant. The signal is band-limited to 60 Hz and is captured at a sampling frequency of 360 Hz with a standard 11 bit resolution analog-to-digital converter (ADC). Every ECG recording considered is divided into short windows, each of 0.9 s length. In order to study the performance of the proposed method in identifying various categories of heart pulses, ECG recordings of five different categories are employed: right bundle branch block (RBBB), normal (N), premature ventricular contraction (PVC), left bundle branch block (LBBB) and atrial premature contraction (APC). A total of 1500 ECG instances are considered, 300 for each of the five classes. This equal representation of each class avoids any possible bias during the classification process.
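The windowing described above can be sketched in a few lines. The helper below (an illustrative sketch, not the authors' code; the function name and the dummy recording are assumptions) splits a recording sampled at 360 Hz into non-overlapping 0.9 s windows, i.e. 324 samples per window:

```python
FS = 360          # MIT-BIH sampling frequency (Hz)
WINDOW_SEC = 0.9  # window length used in the chapter (s)

def segment(recording, fs=FS, window_sec=WINDOW_SEC):
    """Split a 1-D sample sequence into fixed-length windows; drop the tail."""
    n = int(fs * window_sec)  # 324 samples per window
    return [recording[i:i + n] for i in range(0, len(recording) - n + 1, n)]

# A 10-second dummy recording (3600 samples) yields 11 complete windows.
windows = segment([0.0] * 3600)
print(len(windows), len(windows[0]))  # 11 324
```

Balancing the dataset then amounts to keeping exactly 300 such windows per class.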

11.4.2. The event-driven acquisition

In the proposed system, an analog bandpass filter is used to filter the incoming analog ECG signal y(t). The band-limited signal, x(t), is conveyed to an event-driven analog-to-digital converter (EDADC). The traditional ECG based cardiovascular disorder identification and diagnosis systems are based on the Nyquist theory. They utilize traditional ADCs and record and process the ECG signals at a fixed rate. Therefore, the choice of processing rate is made for the worst possible case. As a result, these solutions are not efficient, which is particularly true in the case of time-varying intermittent signals such as the ECG. These disadvantages can be mitigated by embedding signal-piloted EDADCs in the system [37, 40]. The functional mechanism of EDADCs is completely different from that of traditional ADCs. EDADCs do not follow the Nyquist sampling criterion and the acquisition of a new sample is not triggered by a fixed rate clock. Instead, they are based on the mechanism of level crossing sampling (LCS) and respect the Bernstein sampling theorem [37]. Sample acquisition is triggered by the phenomenon of threshold crossing: a new sample is acquired once the band-limited analog signal x(t) traverses any of the preset threshold levels. As a result, the system's acquisition activity and sampling rate adapt to the incoming analog signal.

The signal discretized with LCS is distributed non-uniformly in time. The sampling instants are defined by

$t_{n}=t_{n-1}+\mathrm{d}t_{n}\qquad(11.1)$

where tn is the present sampling instant, tn−1 is the prior instant, and dtn is the time difference between the present and the prior sampling instants [41]. For a selected EDADC amplitude dynamics, ΔV, and resolution, M, its quantum, q, can be calculated as $q=\frac{\Delta V}{{2}^{M-1}}$ [37, 39, 42]. The EDADC focuses on the relevant ECG information and avoids the unwanted information. It obtains a smaller number of samples relative to its classical equivalents. This greatly decreases the operations of the post-processing and transmission modules and achieves a power-efficient implementation.
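To make the LCS mechanism concrete, the sketch below simulates level-crossing acquisition on a densely sampled sinusoid (a minimal illustration with assumed parameters, not the EDADC hardware itself): a sample is emitted whenever the signal crosses a multiple of the quantum q = ΔV/2^(M−1), with the crossing instant located by linear interpolation.

```python
import math

def lcs(times, x, dv, m):
    """Simulated level-crossing sampling of a densely sampled signal.
    Returns (instant, level) pairs for every threshold crossing."""
    q = dv / 2 ** (m - 1)  # EDADC quantum
    samples = []
    for i in range(1, len(x)):
        lo, hi = sorted((x[i - 1], x[i]))
        k = math.floor(lo / q) + 1       # first threshold strictly above lo
        while k * q <= hi:               # every threshold crossed in this step
            level = k * q
            frac = (level - x[i - 1]) / (x[i] - x[i - 1])
            samples.append((times[i - 1] + frac * (times[i] - times[i - 1]), level))
            k += 1
    return samples

# A 2 Hz sinusoid over 1 s on a 1000-point "analog" grid: a 3-bit EDADC
# produces far fewer samples than the underlying uniform representation.
t = [i / 1000 for i in range(1000)]
sig = [math.sin(2 * math.pi * 2 * ti) for ti in t]
ev = lcs(t, sig, dv=2.0, m=3)
print(len(ev))
```

The sample count depends on the signal activity, not on a clock, which is exactly the compression mechanism the chapter exploits.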

11.4.3. The event-driven segmentation

The EDADC output is segmented by the activity selection algorithm (ASA) [41]. It utilizes the non-uniformity of the sampling method to select only the important sections of the signal. The traditional windowing functions [40, 41] do not offer suitable features for the ASA. Only the necessary information of the signal is selected. Furthermore, the form and length of the window function are modified according to the local characteristics of the windowed signal [37, 41]. In addition, an efficient reduction of the phenomenon of leakage is also achieved [41].
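The key property the ASA exploits is that the EDADC only produces samples where the signal varies, so active regions can be located from the local density of the non-uniform sampling instants. The sketch below is a hypothetical illustration of this idea (the function name and the gap threshold are assumptions, not the authors' exact algorithm):

```python
def select_active_segments(instants, max_gap=0.05):
    """Group non-uniform sampling instants into segments separated by
    quiet intervals, i.e. gaps longer than max_gap seconds."""
    segments, start = [], 0
    for i in range(1, len(instants)):
        if instants[i] - instants[i - 1] > max_gap:
            segments.append(instants[start:i])
            start = i
    segments.append(instants[start:])
    return segments

# Two bursts of EDADC activity separated by a quiet interval.
instants = [0.00, 0.01, 0.02, 0.03, 0.50, 0.51, 0.52]
segs = select_active_segments(instants)
print([(s[0], s[-1]) for s in segs])  # two segments
```

Each returned segment plays the role of a window Wi whose length and shape follow the local signal activity.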

11.4.4. The adaptive rate resampling and denoising

For a given resolution M, the EDADC sampling rate changes with the temporal variations of x(t). The highest sampling frequency of EDADC [37] is determined by

$F_{s\mathrm{max}}=\frac{4\,A_{\mathrm{in}}\,f_{\mathrm{max}}}{q}\qquad(11.2)$

where fmax is the highest frequency component of x(t), Ain is the input signal amplitude and Fsmax is the maximum sampling frequency of the EDADC. Let Wi represent the ith chosen segment produced by the ASA. If Fsi is the sampling frequency for Wi , then it can be computed by the relation Fsi = Ni /Li , where Li is the time length of Wi in seconds and Ni is the count of samples in Wi . There are plenty of mature and sophisticated signal treatment tools which are based on the traditional sampling mechanism [40, 41]. Therefore, to benefit from such methods and to integrate them in the suggested solution, each Wi is resampled uniformly. The resampling frequency is tuned by following the locally extracted parameters [41]. The selected signal sections are resampled and processed at the same or lower rates compared to the conventional equivalents. Consequently, the proposed solution's computational advantage is greatly improved relative to conventional counterparts.

Let Frsi represent the resampling rate for the selected segment Wi ; then Frsi is chosen specifically for each Wi . A reference sampling rate, denoted Fr , is selected in the system, and the Frsi selection mechanism is a function of Fr and Fsi . Then, Wi is resampled at a rate of Frsi and this results in a new count of samples, namely Nri . The online resampling is obtained using interpolation. The interpolation method determines how faithfully the resampled signal reproduces the characteristics of the original signal [43]. For the devised method, the interpolation error depends on ΔV, M and the interpolation technique used [43].

Qaisar et al [43] showed that for the case of the EDADC, the maximum resampling error is bounded by q. However, it could be equal to ΔV in the traditional counterparts. This demonstrates the appropriateness of using quick and low-complexity interpolation approaches, rather than the conventional equivalents, in our proposed method. In addition, this study focuses on increasing the computational performance of wearable ECG devices and cloud-based computing. Consequently, the simplified linear interpolation (SLI) approach is considered for online resampling of the segmented signal parts. In SLI the value of xrn is set equal to the mean of its prior and next non-uniform samples and the highest error per interpolation is limited by $\frac{{q}}{{2}}$ [43].
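A rough sketch of this adaptive-rate resampling is given below. The names, the data and in particular the Frsi selection rule (`min(Fr, Fsi)`) are illustrative assumptions, not the authors' exact procedure; the SLI rule itself follows the description above, each resampled value being the mean of the nearest preceding and following non-uniform samples:

```python
def sli_resample(t, x, fr):
    """Uniformly resample one ASA segment (non-uniform instants t, values x)
    using simplified linear interpolation (SLI)."""
    ni = len(t)
    li = t[-1] - t[0]       # segment time length Li (s)
    fsi = ni / li           # local sampling frequency Fsi = Ni / Li
    frsi = min(fr, fsi)     # illustrative choice of Frsi from Fr and Fsi
    nri = int(li * frsi)    # new sample count Nri
    out, j = [], 0
    for k in range(nri):
        tk = t[0] + k / frsi
        while j + 1 < ni and t[j + 1] <= tk:
            j += 1
        nxt = min(j + 1, ni - 1)
        out.append(0.5 * (x[j] + x[nxt]))  # SLI: mean of the two neighbours
    return frsi, out

# 5 non-uniform samples over 0.5 s give Fsi = 10 Hz; with a reference rate
# Fr = 20 Hz the segment is resampled at Frsi = 10 Hz, yielding 5 samples.
frsi, xr = sli_resample([0.0, 0.1, 0.25, 0.4, 0.5],
                        [0.0, 1.0, 0.0, -1.0, 0.0], fr=20.0)
print(frsi, len(xr))
```

Because Frsi never exceeds Fr, inactive or slowly varying segments are processed at rates well below the worst-case Nyquist rate.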

It is necessary to condition the biomedical signals to reduce the impact of artifacts and noise. The conditioning process is frequently achieved by utilizing digital filters [44]. Traditional digital filtering is static in nature. The sampling rate is kept fixed and the digitized signal is conditioned using a fixed order filter. Therefore, the parameters, such as sampling rate and the order of filters, are selected for the worst possible case [45]. Such a choice could result in a remarkable but unnecessary increase in the processing load and power consumption of the conditioning stage. This performance limitation can be improved by using multirate processing concepts [46]. In this context, novel techniques of digital filtering are proposed which can adjust their rate of conditioning and the order of filter according to the incoming signal changes [47].

Keeping in mind the targeted application, an appropriate bank of reference digital filters is realized offline. The implementation is achieved for a set of reference sampling frequencies, denoted as Fref. The elements of the set Fref are computed using a fixed frequency step, denoted as Δ, which leads to a uniform spacing of the elements of Fref. The first element of Fref is Fsmin ⩾ 2FCmax, where FCmax is the maximum cut-off frequency of the intended filters; it defines the lower limit on the elements of Fref and assures an effective digital filtering process. The last element of Fref is Fr and it defines the upper bound on the elements of Fref.

For each selected segment, one appropriate predesigned digital filter is selected during the real-time online signal conditioning. This selection is a function of the values of the elements of the set Fref and the value of Fsi calculated in real time. Depending on the signal variations, the value of Fsi can vary, and therefore an appropriate reference filter choice mechanism is adopted to obtain effective and efficient denoising. For the case when Fsi ⩾ Fr , the offline designed reference filter built for Fr is used for Wi . However, if Fsi < Fr , then the reference filter whose corresponding value Frefc is closest to Fsi is selected for Wi . Here, the index 'c' represents the element of the set Fref selected in real time for the online conditioning of Wi .

This adaptation of Frsi causes Wi to be resampled closer to the Nyquist rates or at sub-Nyquist rates [47]. Hence, during data resampling and denoising procedures, it prevents unwanted interpolations and filtering operations. As a result, the proposed solution increases the computational and power efficiencies.
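The online filter-choice rule described above reduces to a simple lookup. The sketch below illustrates it (the numeric values of Fsmin, Fr and Δ are assumptions chosen for illustration; the offline filter design itself is omitted and each filter is represented only by its design frequency):

```python
def build_fref(fs_min, fr, delta):
    """Offline: the set Fref of reference design frequencies,
    spaced by delta from Fsmin up to Fr."""
    f, out = fs_min, []
    while f <= fr:
        out.append(f)
        f += delta
    return out

def choose_filter(f_ref, fsi):
    """Online: pick the reference filter for a segment with local rate Fsi.
    Use the Fr filter when Fsi >= Fr, else the nearest Fref element."""
    fr = f_ref[-1]
    if fsi >= fr:
        return fr
    return min(f_ref, key=lambda f: abs(f - fsi))

f_ref = build_fref(fs_min=120, fr=360, delta=60)  # [120, 180, 240, 300, 360]
print(choose_filter(f_ref, 400))  # -> 360 (Fsi >= Fr)
print(choose_filter(f_ref, 200))  # -> 180 (nearest Fref element)
```

Each returned frequency indexes one offline-designed filter, so no filter coefficients need to be computed at run time.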

11.4.5. Extraction of features

The interesting features of the denoised signal are extracted using the autoregressive (AR) Burg method [15]. In this case, xrn is modeled as the output of an all-pole causal filter. The method for an Oth order model can be presented as

$x_{rn}=-\sum_{j=1}^{O}a_{j}\,x_{r(n-j)}+b_{n}\qquad(11.3)$

where $x_{r(n-j)}$ are the past samples of the resampled signal, aj are the model's weighting coefficients and bn is white noise with variance ${\sigma }^{2}$. An Oth order model can estimate the power spectral density (PSD) using the following equation, where Fs is the sampling frequency:

$P(f)=\frac{\sigma^{2}/F_{s}}{\left|1+\sum_{j=1}^{O}a_{j}\,e^{-i2\pi fj/F_{s}}\right|^{2}}\qquad(11.4)$
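The Burg recursion itself is compact enough to sketch. The pure-Python implementation below (an illustrative sketch; the variable names and the sinusoidal test signal are not from the chapter) estimates the coefficients aj and the residual variance σ², and evaluates the AR power spectral density of equation (11.4):

```python
import cmath
import math

def arburg(x, order):
    """Burg's method: AR coefficients a_1..a_O (sign convention
    x[n] + a_1 x[n-1] + ... + a_O x[n-O] = b[n]) and residual variance."""
    n = len(x)
    f, b = list(x), list(x)        # forward / backward prediction errors
    a = []
    e = sum(v * v for v in x) / n  # zero-order prediction error power
    for _ in range(order):
        m = len(a) + 1
        num = sum(f[i] * b[i - 1] for i in range(m, n))
        den = sum(f[i] ** 2 + b[i - 1] ** 2 for i in range(m, n))
        k = -2.0 * num / den       # reflection coefficient
        a = [a[j] + k * a[len(a) - 1 - j] for j in range(len(a))] + [k]
        for i in range(n - 1, m - 1, -1):  # lattice error update
            f[i], b[i] = f[i] + k * b[i - 1], b[i - 1] + k * f[i]
        e *= 1.0 - k * k
    return a, e

def ar_psd(a, sigma2, fs, freq):
    """AR power spectral density estimate, as in equation (11.4)."""
    h = 1 + sum(aj * cmath.exp(-2j * cmath.pi * freq * (j + 1) / fs)
                for j, aj in enumerate(a))
    return (sigma2 / fs) / abs(h) ** 2

# A noiseless sinusoid obeys x[n] = 2cos(w) x[n-1] - x[n-2], so an order-2
# Burg fit should give a close to [-2cos(w), 1] and a near-zero residual.
x = [math.sin(0.3 * i) for i in range(200)]
a, e = arburg(x, 2)
print(a, e)
```

In the chapter's pipeline the aj coefficients of each denoised window serve as the feature vector passed to the classifiers.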

11.4.6. Machine learning methods

The technological advancements in computing and electronics have broadened the utilization of machine learning approaches in a variety of applications in our modern era. The same trend is being followed in the domain of biomedical signal processing. A variety of mature machine learning algorithms have been introduced for the classification and diagnosis of biomedical signals. Practical examples are the identification of myocardial ischemia, epilepsy, heart attacks and cardiac hypertrophy. Such an approach is particularly beneficial in the case of emergencies: it can accelerate the process of analysis and decision making and can lead to timely and effective treatment of the patient. Accurate and timely care measures can be taken through effective use of sophisticated machine learning approaches. Currently, various diagnostic modalities are considered by medical practitioners; common examples are electrocardiography, electroencephalography, magnetic resonance imaging and tomographic scanners [48].

Real-time and uninterrupted biomedical signal analysis is very important for tracking the health conditions of patients with critical conditions. The overall process consists of different stages of signal acquisition, conditioning, analysis and classification. It involves a smart combination of a variety of sophisticated techniques and technologies. An effective diagnosis can be realized by extracting pertinent features from the acquired and conditioned biomedical signals. This makes it convenient to identify the divergence of the intended signal from its standard parameters. The monitoring process should be robust and accurate in identifying the type of abnormality occurring. In addition, the doctor should be well trained in interpreting the detected abnormality as an accurate health disorder. Manual analysis of such data needs a lot of training and experience. This is very difficult, in particular in the case of big and multidimensional health data. Therefore, computer based machine learning methods can contribute a great deal in this respect [48].

Machine learning techniques may automate biomedical signal analysis by producing decisions that classify patterns as normal or pathological. The automated identification and recognition of physiological signals using various signal processing techniques has become a crucial feature of clinical surveillance [48]. Because biomedical signals provide such a large amount of data, the main problem is how to classify the signal records efficiently. First, the relevant features must be derived from the acquired biomedical signals; then the size of this feature set must be reduced; finally, the reduced feature set is employed in the classification stage. An algorithm that implements classification is called a classifier. A classifier learns from a training set how to assign a class label to a given feature vector. Algorithms used for predicting categorical labels in the classification process include the k-nearest neighbors (k-NN), artificial neural network (ANN), support vector machine (SVM), classification and regression tree (CART), random tree, REP tree, random forest and LAD tree [49] classifiers.
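The three-stage flow described above (deriving features, reducing the feature-set size, then classifying) can be sketched with scikit-learn. The synthetic dataset, the PCA reducer and the k-NN classifier below are illustrative stand-ins, not the chapter's actual feature pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

# Stand-in for features already extracted from biomedical signal records
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = Pipeline([
    ("reduce", PCA(n_components=8)),          # decrease the feature-set size
    ("classify", KNeighborsClassifier(n_neighbors=5)),  # final classification
])
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The pipeline object keeps the reduction and classification stages coupled, so the same transform fitted on the training set is reused on test data.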

The artificial neural network (ANN) is a set of linked input and output units in which each link carries a specific weight. Learning is accomplished by adjusting these weights, which allows proper classification of test data. The ANN is inherently parallel in nature, so parallelization approaches can be used to speed up the classification [50]. The support vector machine (SVM) performs a nonlinear mapping of the training data into a higher-dimensional space, where it searches for the separating hyperplane with the maximum margin. This yields an effective classifier that can accurately differentiate the test data of each class. k-nearest neighbors (k-NN) classifies a test sample by contrasting it with the training samples: the k closest training instances, found by distance calculation, vote on the class of the considered instance. The classification and regression tree (CART) handles both categorical and numerical prediction, i.e. classification and regression, but does not produce rule sets. It constructs binary trees using the feature and threshold that generate the maximum information gain at each node. The REP tree builds a decision tree by splitting the given dataset and handles missing values; the tree is then pruned using reduced-error pruning, with pruning decisions validated on a held-out set [50]. The random tree is based on the principle of ensemble learning, which generates distinct learners. A random set of data is generated by employing the bagging technique and is then used to grow the decision tree. Its classification principle is to pass the input variables to a collection of predictors; the classification vote of each predictor is used to make the final decision, and the class label is decided by the majority of votes. The LAD tree seeks to characterize data by splitting it with a linear plane, under the assumption that the data of the two groups are generated independently from two normal distributions with equal covariance models but different means. It finds the best position to place a line between the two distributions, which minimizes the approximation error. The random forest is based on a combination of tree predictors, where each tree is grown from the values of a random vector sampled independently; all trees in the forest share a common distribution. The classification error is a function of the accuracy of, and the correlation between, the individual trees [50].
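Several of the single classifiers named above can be compared on a common dataset. This sketch uses scikit-learn equivalents on synthetic stand-in data; DecisionTreeClassifier stands in for CART, and the REP, random and LAD trees are omitted because scikit-learn has no direct counterparts.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for an extracted biomedical feature set
X, y = make_classification(n_samples=300, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

classifiers = {
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "ANN": MLPClassifier(max_iter=2000, random_state=1),
    "CART": DecisionTreeClassifier(random_state=1),
    "Random forest": RandomForestClassifier(random_state=1),
}
scores = {}
for name, model in classifiers.items():
    model.fit(X_tr, y_tr)                 # train each classifier identically
    scores[name] = model.score(X_te, y_te)
    print(f"{name:14s} test accuracy = {scores[name]:.3f}")
```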

Ensemble learning approaches combine a set of robust classifiers. Such an approach is beneficial in terms of classification precision and can outperform methods based on a single classifier; it is a frequently used technique in a variety of machine learning domains. Recent technological advancements have expanded the use of ensemble classification in biomedical applications; for example, ensemble classification based on fully complex-valued relaxation neural networks has been extended to image classification. Ensemble methods apply many classifiers to the same classification problem and combine classifiers of different accuracies using strategies such as averaging and voting. Therefore, better predictive results can be obtained compared to a single classifier. In this chapter each considered single classifier, such as k-NN, ANN, SVM, etc, is also used with the bagging, AdaBoost and random subspace models in order to attain a higher arrhythmia classification accuracy [51].
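A majority-voting combination of heterogeneous classifiers, of the kind described here, can be sketched as follows; the member classifiers and the synthetic data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

ensemble = VotingClassifier(
    estimators=[("svm", SVC()),                 # each member votes on the class
                ("knn", KNeighborsClassifier()),
                ("rf", RandomForestClassifier(random_state=2))],
    voting="hard",                              # majority vote decides the label
)
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```

With `voting="soft"` the same object averages predicted class probabilities instead of counting votes, which corresponds to the averaging strategy mentioned above.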

In the case of bagging, several training datasets of identical size are chosen at random from the broader problem domain, and the same learning method is used to build a decision tree for each dataset. These trees might be expected to be essentially identical and to enforce the same classification for each new test case. This expectation, however, is usually incorrect, particularly if the training datasets are small, because decision tree induction is not a stable process: small changes in the training data can cause a different attribute to be selected at a particular node, with significant consequences for the structure of the subtree under that node. Consequently, there are test instances for which the different decision trees produce different predictions. Bagging attempts to neutralize this instability of learning methods by simulating the procedure described above with a given training set, instead of obtaining independent datasets from the domain [52]. AdaBoost can be utilized for bagging-like learning. To simplify the method, the learning algorithm is presumed to be able to handle weighted instances, where the weights are positive numbers. The presence of instance weights affects the way the error of a classifier is measured: it is the sum of the weights of the misclassified instances divided by the cumulative weight of all instances. Weighting the instances allows the learning algorithm to concentrate on the set of instances with higher weights; such instances become more critical than others, as there is a greater incentive to classify them correctly [52]. Random subspace is an ensemble learning approach that aims to reduce the correlation among the estimators in an ensemble by training them on random subsets of the features rather than on the whole feature set. It is comparable to bagging, except that it is the attributes, rather than the instances, that are sampled randomly for each learner. This informally discourages individual learners from over-focusing on features that happen to be highly predictive in the training set but fail to be as reliable for points beyond it. Random subspace is thus an appealing alternative for problems where the number of features is much greater than the number of training points.
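The three ensemble models used in this chapter can be sketched with scikit-learn. Random subspace is obtained here from `BaggingClassifier` by sampling features instead of training instances; all parameter values and the synthetic data are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=20, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
base = DecisionTreeClassifier(random_state=3)

ensembles = {
    # bootstrap samples of identical size drawn from the training data
    "bagging": BaggingClassifier(base, n_estimators=25, random_state=3),
    # reweights instances so later learners focus on past mistakes
    "AdaBoost": AdaBoostClassifier(n_estimators=25, random_state=3),
    # each learner sees a random half of the features, not of the instances
    "random subspace": BaggingClassifier(base, n_estimators=25,
                                         bootstrap=False, max_features=0.5,
                                         random_state=3),
}
results = {}
for name, model in ensembles.items():
    model.fit(X_tr, y_tr)
    results[name] = model.score(X_te, y_te)
    print(f"{name:16s} test accuracy = {results[name]:.3f}")
```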

11.5. The performance evaluation measures

The following evaluation measures are used.

11.5.1. Compression ratio

One criterion is to measure the system output in terms of the gain in compression. Classically, x(t) is captured at a fixed rate, and the total sample count for a given time length $L_T$ is calculated as $N = F_{\mathrm{ref}} \times L_T$. The variations of the incoming signal tune the sampling rate of the EDADC [37]: when the signal is smoother the sampling rate remains low, and when the signal varies rapidly the sampling rate increases accordingly. This is why the number of collected samples changes for each selected segment, even if the temporal length of each segment is kept equal to $L_T$. The sampling density of the EDADC also depends on its resolution, alongside the characteristics of the incoming signal [37]. Suppose that the designed approach collects $N_{\mathrm{ED}}$ samples for the time length $L_T$. Then the compression ratio $R_{\mathrm{COMP}}$ is simply the ratio between $N$ and $N_{\mathrm{ED}}$. This can be mathematically described as

$R_{\mathrm{COMP}} = \dfrac{N}{N_{\mathrm{ED}}} = \dfrac{F_{\mathrm{ref}} \times L_T}{N_{\mathrm{ED}}}$        (11.5)
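As a minimal illustration of the compression ratio defined above, the computation takes the fixed sampling rate, the segment length and the event-driven sample count; the numbers in the usage line are hypothetical.

```python
def compression_ratio(f_ref_hz: float, l_t_s: float, n_ed: int) -> float:
    """R_COMP = N / N_ED, where N = Fref * LT is the classical sample count."""
    n = f_ref_hz * l_t_s          # fixed-rate sample count over the segment
    return n / n_ed               # divided by the event-driven sample count

# e.g. a 1 s segment sampled classically at 360 Hz but represented by
# 139 event-driven samples is compressed roughly 2.6-fold
print(compression_ratio(360, 1.0, 139))
```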

11.5.2. Computational complexity

The computational complexity of the front-end processing module is studied in depth up to the denoising stage. The complexity of the feature extraction and classification modules is evaluated at an intuitive level by taking into account the compression ratio of the proposed solution. The adaptive rate FIR (ARFIR) method is utilized for denoising the resampled signal [41].

The numerical complexity of a classical K order FIR filter is well known, and the entire computational complexity CFIR can be measured using

$C_{\mathrm{FIR}} = K \times N\ \mathrm{additions} + K \times N\ \mathrm{multiplications}$        (11.6)

The online adaptation requires additional operations in the case of the devised solution. First, a reference filter hck is selected for Wi . This real-time filter selection is performed using the successive approximation algorithm, which performs log2(Q) comparisons in the worst case. Here, Q is the length of the set Fref, in other words the count of elements in this set. It is also necessary to resample the selected segments at the real-time computed resampling rate Frsi . The live resampling is achieved by using the SLI. For the SLI, the count of operations for each Wi is Nri additions and Nri binary weighted divisions. The binary weighted divisions are realized by employing the one-bit right-shift mechanism, whose complexity is insignificant compared to the addition and multiplication operations [53]. With this background, the computational cost of the SLI is approximated as Nri additions for Wi . After uniform resampling, Wi is conditioned by utilizing a Ki th order digital filter. The count of operations during the conditioning process is Ki·Nri multiplications and Ki·Nri additions for Wi . The complexity of a comparison operation is assumed to be equal to that of an addition operation. This assumption allows us to evaluate the processing cost of the devised ARFIR with respect to that of the classical one. In this context the computational complexity of the ARFIR can be represented as

$C_{\mathrm{ARFIR}} = \displaystyle\sum_{i} \left( \log_2(Q) + N_{ri} + K_i \times N_{ri} \right)\ \mathrm{additions} + \sum_{i} K_i \times N_{ri}\ \mathrm{multiplications}$        (11.7)
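The operation counts described above can be tallied in a quick sketch. Only Q = 8 and the classical 360 Hz sampling rate with a 117-order filter are taken from the chapter; the per-segment pairs (K_i, N_ri) below are hypothetical.

```python
import math

def classical_fir_ops(k: int, n: int) -> tuple[int, int]:
    """(additions, multiplications) for a K-order FIR over N samples."""
    return k * n, k * n

def arfir_ops(segments: list[tuple[int, int]], q: int) -> tuple[float, float]:
    """segments: (K_i, N_ri) pairs; q: size of the reference filter set.
    Per segment: log2(Q) comparisons (counted as additions) for filter
    selection, N_ri additions for the SLI, and K_i*N_ri additions and
    multiplications for the conditioning filter."""
    adds = sum(math.log2(q) + n_ri + k_i * n_ri for k_i, n_ri in segments)
    mults = sum(k_i * n_ri for k_i, n_ri in segments)
    return adds, mults

# classical chain: 117-order filter on 360 samples per second of signal
print(classical_fir_ops(117, 360))
# adaptive chain: e.g. two quieter segments resampled at lower rates
print(arfir_ops([(43, 133), (64, 198)], q=8))
```

Because the adaptive chain works on fewer samples with lower filter orders, its totals stay well below the classical counts even with the selection and interpolation overhead.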

11.5.3. Classification accuracy

The pertinent parameters of the denoised signal are extracted using the AR Burg method. Several robust classification algorithms use these derived parameters to identify the ECG categories, and the effectiveness of the entire chain is evaluated in terms of the classification performance. A limited collection of data causes difficulties in estimating classification results, and a cross-validation scheme is often used in this situation [50]. Therefore, this analysis utilizes ten-fold cross-validation. For each classifier, the accuracy, F-measure, ROC area and kappa values are evaluated using the same test set.
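The ten-fold cross-validation protocol can be sketched with scikit-learn; the random forest and the synthetic stand-in for the AR Burg feature set are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the extracted AR Burg feature matrix
X, y = make_classification(n_samples=300, n_features=10, random_state=4)

# cv=10 splits the data into ten folds; each fold serves once as the
# test set while the remaining nine are used for training
cv = cross_validate(RandomForestClassifier(random_state=4), X, y,
                    cv=10, scoring=["accuracy", "f1_macro"])
print("mean accuracy :", cv["test_accuracy"].mean())
print("mean F-measure:", cv["test_f1_macro"].mean())
```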

Accuracy is defined as the ratio of correctly classified instances to all instances and is given as

$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$        (11.8)

where TP is true positives, TN is true negatives, FP is false positives and FN is false negatives.

The F-measure, defined as the harmonic mean of the precision and recall indicators, can be calculated as

$F\text{-measure} = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} = \dfrac{2TP}{2TP + FP + FN}$        (11.9)

Receiver operating characteristic (ROC) analysis is a helpful tool that allows classifiers to be evaluated at, and compared across, multiple operating points. The ROC curve displays the full range of operating points in a single graph, showing the corresponding trade-off between the true positive and false positive rates.

The kappa statistic takes the agreement expected by chance into account by deducting it from the predictor's successes and expressing the result as a proportion of the total for a perfect predictor. Kappa is thus used to measure the agreement between the predicted and actual classifications of a dataset, corrected for the agreement that occurs by chance.
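The accuracy, F-measure and kappa computations can be illustrated with hypothetical confusion counts for a binary problem:

```python
# Hypothetical confusion counts for a binary classifier
tp, tn, fp, fn = 90, 85, 10, 15
total = tp + tn + fp + fn

accuracy = (tp + tn) / total                               # equation (11.8)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f_measure = 2 * precision * recall / (precision + recall)  # equation (11.9)
print(f"accuracy = {accuracy:.3f}, F-measure = {f_measure:.3f}")

# kappa: observed agreement corrected for agreement expected by chance,
# where the chance term comes from the marginal class frequencies
p_observed = accuracy
p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"kappa    = {kappa:.3f}")
```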

11.6. Experimental results and discussion

11.6.1. Experimental results

The efficiency of the proposed solution is studied for the collection, processing and classification of the MIT-BIH arrhythmia dataset signals. In the classical case, each considered record is divided into fixed length windows of 0.9 s length. The signal is recorded at a fixed sampling rate of 360 Hz. Therefore, it results in 330 samples for each window. In the case of the suggested approach, the ECG signal is acquired with an EDADC of M = 5. Since the ECG signals are band-limited by fmax = 60 Hz, the maximum EDADC sampling frequency is Fsmax = 3.72 kHz (see equation (11.2)).

The output of the EDADC is irregularly spaced in time and therefore cannot be segmented using standard windowing operations. In this context, the ASA is utilized for segmenting the EDADC output. The reference window length Lref is chosen equal to 1 s [41] and is tuned for each selected segment by the ASA. This mainly allows the active signal portions to be focused on and treated while the unwanted and redundant baseline is neglected. This activity selection process augments the processing gain and the power effectiveness of the suggested method. By extracting the parameters of each selected segment, the ASA permits adjusting, in real time, the sampling frequency and the denoising-stage filter order as a function of the incoming signal variations. This allows the process to concentrate only on the interesting sections of the signal and achieves a reduced number of samples for each selected window, in contrast to the traditional method. In the case of the classical equivalent, by contrast, the window length and the count of samples per window remain identical regardless of the incoming signal variations, at 0.9 s and 330 samples, respectively. Consequently, this increases the processing and power efficiencies of the suggested approach.

For all 300 instances of each class the average compression ratios are determined using equation (11.5). These are 2.76, 2.25, 2.59, 2.61 and 2.69, respectively, for classes N, RBBB, LBBB, APC and PVC. The average compression gain obtained over all five classes is 2.6-fold.
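As a quick arithmetic check, the five per-class compression ratios reported above do average to roughly 2.6:

```python
# Per-class compression ratios reported for the five ECG classes
ratios = {"N": 2.76, "RBBB": 2.25, "LBBB": 2.59, "APC": 2.61, "PVC": 2.69}
mean_gain = sum(ratios.values()) / len(ratios)
print(f"average compression gain = {mean_gain:.2f}-fold")
```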

The ASA's output is resampled uniformly using the SLI and is conditioned using the ARFIR. A bank of bandpass filters is built offline, with cut-off frequencies of [FCmin = 0.5; FCmax = 30] Hz for each filter. This keeps the emphasis on the ECG band of interest while attenuating the undesirable parts, and results in increased precision of the post feature extraction and classification modules. The filter bank is implemented for a range of reference sampling frequencies Fref between 132.5 Hz (greater than 2·FCmax) and 360 Hz, with a step of Δ = 32.5 Hz. It realizes a bank of Q = 8 bandpass FIR filters. A description of the filter bank is given in table 11.1.

Table 11.1.  Summary of the reference filter bank parameters.

Filter hck      h1k     h2k     h3k     h4k     h5k     h6k     h7k     h8k
Frefc (Hz)      132.5   165     197.5   230     262.5   295     327.5   360
Order Kc        43      53      64      75      85      96      107     117
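The reference filter bank of table 11.1 could be generated along the following lines with SciPy. The window-based `firwin` design is an assumption, since the chapter does not state which FIR design method was used, but the cut-offs, orders and reference frequencies follow the table.

```python
from scipy.signal import firwin

fc_min, fc_max = 0.5, 30.0            # ECG band of interest (Hz)
f_refs = [132.5, 165.0, 197.5, 230.0, 262.5, 295.0, 327.5, 360.0]
orders = [43, 53, 64, 75, 85, 96, 107, 117]

bank = {}
for f_ref, k in zip(f_refs, orders):
    # a K-order FIR filter has K + 1 taps; pass_zero=False gives a bandpass
    bank[f_ref] = firwin(k + 1, [fc_min, fc_max], fs=f_ref, pass_zero=False)

print({f: len(h) for f, h in bank.items()})
```

At run time, the filter whose reference frequency matches the segment's resampling rate Frsi would simply be looked up in `bank`.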

Depending on the incoming signal variations and the employed EDADC resolution, the real-time computed resampling frequency Frsi can be specific to each Wi , and it is aligned in real time with the sampling frequency Frefc of the chosen reference filter. The conditioning stage improves the signal to noise ratio (SNR) of the acquired signal and allows an improved classification performance to be achieved [12, 13]. The employed adaptive rate denoising strategy achieves the conditioning of the signal with a reduced number of operations compared to the traditional counterparts [41]. The average reductions, compared to the classical method, in the number of operations of the devised ARFIR are calculated for all 300 instances of each considered class. In terms of additions these are, respectively, 5.95, 4.62, 5.47, 5.64 and 5.93 for classes N, RBBB, LBBB, APC and PVC. In terms of multiplications these are, respectively, 6.08, 4.71, 5.59, 5.77 and 6.06 for classes N, RBBB, LBBB, APC and PVC. The average gains over all five classes are also computed: 5.52-fold in additions and 5.64-fold in multiplications.

The AR Burg method is employed to extract the discriminative and informative features of the conditioned signal. These attributes are then used for classification. For the classical case, summaries of the classification results obtained using the different classifiers are described in tables 11.2 and 11.3, and the classification results obtained by applying the proposed event-driven acquisition and processing-based approach are presented in tables 11.4 and 11.5.

Table 11.2.  Summary of the classification accuracies and F-measures for the classical case.

  Accuracy F-measure
  Single Bagging AdaBoost Random subspace Single Bagging AdaBoost Random subspace
SVM 0.881 0.876 0.891 0.847 0.882 0.877 0.893 0.848
k-NN 0.9 0.893 0.9 0.907 0.899 0.892 0.899 0.907
ANN 0.868 0.891 0.868 0.883 0.866 0.89 0.866 0.882
Random forest 0.957 0.957 0.958 0.951 0.957 0.957 0.958 0.951
CART 0.901 0.929 0.948 0.935 0.901 0.929 0.948 0.935
REP tree 0.91 0.929 0.949 0.933 0.91 0.929 0.949 0.934
Random tree 0.891 0.942 0.897 0.952 0.891 0.942 0.897 0.952
LAD tree 0.887 0.917 0.935 0.914 0.887 0.917 0.935 0.915

Table 11.3.  Summary of ROC areas and kappa statistics for the classical case.

  ROC area Kappa
  Single Bagging AdaBoost Random subspace Single Bagging AdaBoost Random subspace
SVM 0.952 0.966 0.98 0.959 0.8508 0.845 0.8642 0.8083
k-NN 0.975 0.978 0.946 0.989 0.875 0.8667 0.875 0.8842
ANN 0.97 0.983 0.966 0.982 0.835 0.8642 0.835 0.8533
Random forest 0.996 0.997 0.996 0.996 0.9458 0.9458 0.9475 0.9383
CART 0.964 0.993 0.995 0.993 0.8767 0.9117 0.935 0.9183
REP tree 0.965 0.992 0.994 0.993 0.8875 0.9117 0.9367 0.9167
Random tree 0.932 0.992 0.935 0.992 0.8642 0.9275 0.8708 0.94
LAD tree 0.978 0.988 0.992 0.99 0.8583 0.8958 0.9192 0.8925

Table 11.4.  The summary of classification accuracies and F-measures for the proposed approach.

  Accuracy F-measure
  Single Bagging AdaBoost Random subspace Single Bagging AdaBoost Random subspace
SVM 0.937 0.929 0.93 0.927 0.938 0.93 0.93 0.927
k-NN 0.931 0.92 0.928 0.93 0.931 0.92 0.928 0.93
ANN 0.93 0.941 0.93 0.945 0.93 0.941 0.93 0.945
Random forest 0.943 0.943 0.943 0.947 0.943 0.943 0.943 0.947
CART 0.894 0.914 0.93 0.927 0.894 0.914 0.93 0.927
REP tree 0.873 0.909 0.925 0.914 0.873 0.909 0.925 0.914
Random tree 0.868 0.931 0.88 0.935 0.868 0.931 0.88 0.935
LAD tree 0.79 0.851 0.901 0.855 0.793 0.852 0.901 0.856

Table 11.5.  The summary of ROC areas and kappa statistics for the proposed approach.

  ROC area Kappa
  Single Bagging AdaBoost Random subspace Single Bagging AdaBoost Random subspace
SVM 0.981 0.988 0.988 0.987 0.9217 0.9117 0.9125 0.9083
k-NN 0.985 0.989 0.971 0.992 0.9142 0.9 0.91 0.9125
ANN 0.989 0.995 0.987 0.995 0.9125 0.9258 0.9125 0.9317
Random forest 0.996 0.996 0.996 0.997 0.9283 0.9283 0.9292 0.9333
CART 0.96 0.991 0.991 0.994 0.8675 0.8925 0.9125 0.9092
REP tree 0.962 0.991 0.992 0.992 0.8408 0.8858 0.9058 0.8925
Random tree 0.918 0.992 0.925 0.994 0.835 0.9133 0.85 0.9183
LAD tree 0.942 0.977 0.99 0.978 0.7375 0.8142 0.8767 0.8192

Table 11.2 shows the results for the five-class ECG dataset in terms of the classification precision and the F-measure values for the classical case. Among the single classifiers, the highest classification accuracy of 95.7% is achieved by the random forest. A minor increase in performance is achieved by the ensemble of random forest with AdaBoost, with an accuracy of 95.8%. Among the single classifiers, the highest F-measure value of 0.957 is also achieved by the random forest. Among the ensemble classifiers, the random forest with AdaBoost secures the highest F-measure value of 0.958. It is apparent that the accuracies of the weak classifiers are improved by the ensemble classifier models.

Table 11.3 shows the results for the five-class ECG dataset in terms of the ROC area and the kappa statistics for the classical sampling case. Among the single classifiers, the highest ROC area value of 0.996 is achieved by the random forest. A similar performance is achieved by the ensembles of random forest with AdaBoost and random forest with random subspace. However, a minor performance increase is obtained by the random forest with bagging, achieving an ROC area value of 0.997. Among the single classifiers, the highest kappa value of 0.946 is also achieved by the random forest. Among the ensemble classifiers, the random forest with bagging secures a similar performance. A minor increase is obtained by the random forest with AdaBoost, yielding a kappa value of 0.948. Moreover, it is clearly seen that the accuracies of the weak classifiers are improved by the ensemble classifier models.

The above results (tables 11.2 and 11.3) show that for the classical case the chain of classical ADC, windowing, FIR filter and AR Burg achieves the best classification performance with the random forest. A minor increase in the performance is achieved by using a smart ensemble of random forest with AdaBoost. In addition, the accuracies of the weak classifiers are improved by the ensemble classifier models.

Table 11.4 shows results for the five-class ECG dataset in terms of the classification precision and the F-measure values for the devised chain. Among the single classifiers, the highest classification accuracy of 94.3% is achieved by the random forest. A similar performance is achieved by the ensembles of random forest with bagging and random forest with AdaBoost. A minor increase is obtained by the ensemble of ANN with random subspace by achieving a precision of 94.5%. Overall, the highest classification accuracy of 94.7% is achieved by the ensemble of random forest with random subspace. Among the single classifiers, the highest F-measure value of 0.943 is achieved by the random forest. A similar performance is achieved by the ensembles of random forest with bagging and random forest with AdaBoost. A minor increase is obtained by the ensemble of ANN with random subspace by achieving an F-measure of 0.945. Overall, the highest F-measure value of 0.947 is secured by the ensemble of random forest with random subspace. In this case, the accuracies of the weak classifiers are improved by the ensemble classifier models as well.

Table 11.5 shows the results for the five-class ECG dataset in terms of the ROC area and the kappa statistics for the devised chain. Among the single classifiers, the highest ROC area value of 0.996 is achieved by the random forest. A similar performance is achieved by the ensembles of random forest with bagging and random forest with AdaBoost. A minor increase is achieved by the ensemble of random forest with random subspace, achieving an ROC area value of 0.997. Among the single classifiers, the highest Kappa value of 0.928 is achieved by the random forest. A similar performance is achieved by the ensemble of random forest with bagging. A minor increase is obtained by the ensemble of random forest with AdaBoost, achieving a kappa value of 0.929. A further increase is achieved by the ensemble of ANN with random subspace, yielding a kappa value of 0.932. Overall, the highest kappa value of 0.933 is achieved by the ensemble of random forest with random subspace. One more point that should be mentioned is again that the accuracies of the weak classifiers are improved by the ensemble classifier models.

The above results (tables 11.4 and 11.5) show that, for the suggested event-driven chain of the EDADC, ASA, ARFIR and AR Burg method, random forest achieves the best classification performance. A minor increase in the performance is achieved using a smart ensemble of random forest with random subspace.

11.6.2. Discussion

Recent advancements in wireless sensor techniques and technologies are enhancing the development and deployment of smart implants in general, and of wireless and self-powered biomedical implants in particular. A variety of biomedical implants have been introduced in the literature. The remote monitoring and diagnosis of patients is a beneficial concept and is particularly helpful for the remote and rural communities of the developing world. Inspired by its promising benefits, this domain of research has attracted many researchers and industries from different fields, such as computation, information science and electronics. A recent trend is the integration of cloud-based applications in modern mobile healthcare solutions. Considering the developments and results in the cloud-based mobile healthcare sector, this approach can be expected to expand and be deployed much further in the near future, and to revolutionize the sensor, computing and networking fields of research and development.

Cloud-based biomedical signal processing for the automatic diagnosis of health condition monitoring is a novel trend. It is evident that it is going to play an important role in future mobile healthcare solutions. One key application is to automatically detect cardiovascular system disorders to avoid any possible heart failures. One of the aims of this chapter is to reflect many of these developments and also to recognize other essential research issues in the mobile healthcare framework. It is based on smart wearables that are linked to cloud-based applications.

The benefits of the proposed method are evident from the results. It employs an intelligent combination of the event-driven acquisition and the adaptive rate processing mechanisms. It is capable of adjusting Li , Frsi and Ni as a function of the incoming signal variations. This ensures the dynamic parameter adaptation of the suggested solution in accordance with the incoming signal disparities. It is shown how the adaptation of Li and Ni automates the real-time tuning of Frsi . This real-time adjustment of Frsi enhances the processing effectiveness and the power efficiency of the designed approach in contrast to the traditional equivalent. It is obtained by omitting needless operations while performing the real-time and online interpolation of the active signal portions. It also avoids the collection and conditioning of irrelevant information and leads towards an effective denoising mechanism. Additionally, the real-time adoption of filter orders also adds to the effectiveness of the suggested conditioning mechanism [11].

One crucial function is the consistent and effective mobile supervision of chronically ill patients. In this chapter we developed an event-driven, adaptive rate, cloud-based remote health monitoring system that utilizes ECG signals for arrhythmia categorization. The primary objective of this chapter is to illustrate how wireless biomedical instruments can be used to anticipate and detect chronic disorders in real time, using an automated, smart and scalable cloud-based mobile healthcare monitoring framework. The suggested approach was shown to be practical, accurate and effective for the measurement and evaluation of cardiac health. Advances in mobile device and sensor technologies offer innovative ways to improve the quality of care for patients with chronic disorders. The proposed approach is new and has the potential to improve healthcare technology and medical systems for the coming age.

11.7. Conclusion

In this chapter we developed an event-driven and adaptive rate cloud-based remote cardiac health monitoring system. The proposed approach is new, and has the ability to improve the healthcare technology and medical systems in future.

The results show that, overall, the average reductions in the count of operations for the adaptive rate filtering, in contrast to the classical equivalent, are 5.52-fold in terms of additions and 5.64-fold in terms of multiplications. In addition, the proposed approach achieves a notable reduction in the count of acquired samples; for the studied case it is 2.6-fold. This ensures an important enhancement in the effectiveness of the post-processing stages, such as conditioning, transmission and classification. Moreover, a more than 50% reduction in bandwidth use is also ensured by these findings. On the cloud-based processing side, the transformation of a 2.6-fold reduced count of samples is realized with reduced order autoregressive (AR) Burg models, in contrast to the fixed rate classical equivalents. This ensures a more than 50% decrease in the real-time count of operations of the feature extraction process compared to the traditional methods. A similar gain in processing effectiveness is also expected for the classification stage. Furthermore, a simpler 5 bit resolution EDADC is employed in the proposed approach for the recording of ECG signals, whereas a relatively complex 11 bit resolution ADC is utilized in the case of the traditional equivalents. This ensures a simpler circuit-level realization of the proposed method in contrast to the conventional equivalents.

The highest accuracy obtained by the proposed method is 1.1% lower than the classical counterpart's highest classification accuracy. This shows that the suggested solution achieves a comparable classification accuracy while clearly outperforming the traditional counterpart in terms of processing, transmission and hardware simplicity.

System performance is a function of the utilized resampling, denoising, feature extraction and classification approaches. A future work is to study the system's effectiveness, in terms of processing cost and precision, while utilizing alternative feature mining methods such as the discrete cosine transform and the discrete wavelet transform. Other research opportunities are to explore the miniaturization, optimization and integrated development of the proposed solution.

Acknowledgments

This work is funded by Effat University under the grant number UC#7/28Feb 2018/10.2–44g.

References

  • [1]Arunkumar K and Bhaskar M 2019 Heart rate estimation from photoplethysmography signal for wearable health monitoring devices Biomed. Signal Process. Control 50 1–9
  • [2]Baig M M, Afifi S, GholamHosseini H and Mirza F 2019 A systematic review of wearable sensors and IoT-based monitoring applications for older adults—a focus on ageing population and independent living J. Med. Syst. 43 233
  • [3]Atalay O, Atalay A, Gafford J and Walsh C 2018 A highly sensitive capacitive‐based soft pressure sensor based on a conductive fabric and a microporous dielectric layer Adv. Mater. Technol. 1700237
  • [4]Majumder S, Chen L, Marinov O, Chen C H, Mondal T and Deen M J 2018 Noncontact wearable wireless ECG systems for long-term monitoring IEEE Rev. Biomed. Eng. 11 306–21
  • [5]Venkatesan C, Karthigaikumar P and Satheeskumaran S 2018 Mobile cloud computing for ECG telemonitoring and real-time coronary heart disease risk detection Biomed. Signal Process. Control 44 138–45
  • [6]Catarinucciet L et al 2015 An IoT-aware architecture for smart healthcare systems IEEE Internet Things J. 515–26
  • [7]Wang X, Zhu Y, Ha Y, Qiu M and Huang T 2016 An FPGA-based cloud system for massive ECG data analysis IEEE Trans. Circuits Syst. II Express Briefs 64 309–13
  • [8]Wang X, Gui Q, Liu B, Jin Z and Chen Y 2013 Enabling smart personalized healthcare: a hybrid mobile-cloud approach for ECG telemonitoring IEEE J. Biomed. Health Inform. 18 739–45
  • [9]Guzik P and Malik M 2016 ECG by mobile technologies J. Electrocardiol. 49 894–901
  • [10]Serhani M A, Menshawy M E and Benharref A 2016 SME2EM: smart mobile end-to-end monitoring architecture for life-long diseases Comput. Biol. Med. 68 137–54
  • [11]Qaisar S M 2019 Efficient mobile systems based on adaptive rate signal processing Comput. Electr. Eng. 79 106462
  • [12]Qaisar S M and Subasi A 2018 An adaptive rate ECG acquisition and analysis for efficient diagnosis of the cardiovascular diseases pp 177–81
  • [13]Subasi A, Bandic L and Qaisar S M 2020 Innovation in Health Informatics  (Amsterdam: Elsevier)  pp 217–43
  • [14]Mozaffarian D et al 2016 Executive summary: heart disease and stroke statistics—2016 update: a report from the American Heart Association Circulation 133 447–54
  • [15]Alickovic E and Subasi A 2015 Effect of multiscale PCA de-noising in ECG beat classification for diagnosis of cardiovascular diseases Circuits Syst. Signal Process. 34 513–33
  • [16]Phukpattaranont P 2015 QRS detection algorithm based on the quadratic filter Expert Syst. Appl. 42 4867–77
  • [17]Hesar H D and Mohebbi M 2016 ECG denoising using marginalized particle extended Kalman filter with an automatic particle weighting strategy IEEE J. Biomed. Health Inform. 21 635–44
  • [18]Rodríguez R, Mexicano A, Bila J, Cervantes S and Ponce R 2015 Feature extraction of electrocardiogram signals by applying adaptive threshold and principal component analysis J. Appl. Res. Technol. 13 261–9
  • [19]Linh T H 2018 A solution for improvement of ECG arrhythmia recognition using respiration information Vietnam J. Sci. Technol. 56 335
  • [20]da S. Luz E J, Schwartz W R, Cámara-Chávez G and Menotti D 2016 ECG-based heartbeat classification for arrhythmia detection: a survey Comput. Methods Programs Biomed. 127 144–64
  • [21]Li P et al 2016 High-performance personalized heartbeat classification model for long-term ECG signal IEEE Trans. Biomed. Eng. 64 78–86
  • [22]Zhang X and Lian Y 2014 A 300-mV 220-nW event-driven ADC with real-time QRS detection for wearable ECG sensors IEEE Trans. Biomed. Circuits Syst. 8 834–43
  • [23]Rezaii T Y, Beheshti S, Shamsi M and Eftekharifar S 2018 ECG signal compression and denoising via optimum sparsity order selection in compressed sensing framework Biomed. Signal Process. Control 41 161–71
  • [24]Shaw L, Rahman D and Routray A 2018 Highly efficient compression algorithms for multichannel EEG IEEE Trans. Neural Syst. Rehabil. Eng. 26 957–68
  • [25]Chen S, Hua W, Li Z, Li J and Gao X 2017 Heartbeat classification using projected and dynamic features of ECG signal Biomed. Signal Process. Control 31 165–73
  • [26]Niederhauser T, Haeberlin A, Jesacher B, Fischer A and Tanner H 2017 Model-based delineation of non-uniformly sampled ECG signals pp 1–4
  • [27]Einthoven W 1903 The string galvanometer and the human electrocardiogram vol 6 pp 107–15
  • [28]MUSE Meditation Made Easy Muse: the Brain Sensing Headband http://choosemuse.com/ (accessed 19 Oct. 2018) 
  • [29]Emotiv https://emotiv.com/ (accessed 19 Oct. 2018) 
  • [30]GE Healthcare https://gehealthcare.co.uk/ (accessed 19 Oct. 2018) 
  • [31]Salvador C H et al 2005 Airmed-cardio: a GSM and Internet services-based system for out-of-hospital follow-up of cardiac patients IEEE Trans. Inf. Technol. Biomed. 9 73–85
  • [32]Herscovici N et al 2007 m-health e-emergency systems: current status and future directions [Wireless corner] IEEE Antennas Propag. Mag. 49 216–31
  • [33]Shih D H, Chiang H S, Lin B and Lin S B 2010 An embedded mobile ECG reasoning system for elderly patients IEEE Trans. Inf. Technol. Biomed. 14 854–65
  • [34]Ren Y, Werner R, Pazzi N and Boukerche A 2010 Monitoring patients via a secure and mobile healthcare system IEEE Wirel. Commun. 17 59–65
  • [35]Xia H, Asif I and Zhao X 2013 Cloud-ECG for real time ECG monitoring and analysis Comput. Methods Programs Biomed. 110 253–9
  • [36]Qaisar S M et al 2017 Time-domain characterization of a wireless ECG system event driven A/D converter pp 1–6
  • [37]Qaisar S M, Yahiaoui R and Gharbi T 2013 An efficient signal acquisition with an adaptive rate A/D conversion pp 124–9
  • [38]Jin S W, Li J J, Li Z N and Wang A X 2017 A hysteresis comparator for level-crossing ADC pp 7753–7
  • [39]Moody G B and Mark R G 2001 The impact of the MIT-BIH arrhythmia database IEEE Eng. Med. Biol. Mag. 20 45–50
  • [40]Welch T B, Wright C H and Morrow M G 2016 Real-time Digital Signal Processing from MATLAB to C with the TMS320C6x DSPs  (Boca Raton, FL: CRC Press) 
  • [41]Qaisar S M, Fesquet L and Renaudin M 2014 Adaptive rate filtering a computationally efficient signal processing approach Signal Process. 94 620–30
  • [42]Allier E, Sicard G, Fesquet L and Renaudin M 2003 A new class of asynchronous A/D converters based on time quantization pp 196–205
  • [43]Qaisar S M, Akbar M, Beyrouthy T, Al-Habib W and Asmatulah M 2016 An error measurement for resampled level crossing signal pp 1–4
  • [44]Kohler B U, Hennig C and Orglmeister R 2002 The principles of software QRS detection IEEE Eng. Med. Biol. Mag. 21 42–57
  • [45]Tan L and Jiang J 2018 Digital Signal Processing: Fundamentals and Applications  (New York: Academic) 
  • [46]Noh D and Katsianos T 2018 Multi-rate system for audio processing US Patent Application No. 
  • [47]Qaisar S M, Fesquet L and Renaudin M 2006 Spectral analysis of a signal driven sampling scheme pp 1–5
  • [48]Begg R, Lai D T and Palaniswami M 2007 Computational Intelligence in Biomedical Engineering  (Boca Raton, FL: CRC Press) 
  • [49]Siuly S, Li Y and Zhang Y 2016 EEG Signal Analysis and Classification  (Berlin: Springer) 
  • [50]Subasi A 2019 Practical Guide for Biomedical Signals Analysis Using Machine Learning Techniques: A MATLAB Based Approach  (New York: Academic) 
  • [51]Saraswathi D and Srinivasan E 2014 An ensemble approach to diagnose breast cancer using fully complex-valued relaxation neural network classifier Int. J. Biomed. Eng. Technol. 15 243–60
  • [52]Hall M, Witten I and Frank E 2011 Data Mining: Practical Machine Learning Tools and Techniques  (Burlington, MA: Kaufmann) 
  • [53]Cavanagh J 2017 Computer Arithmetic and Verilog HDL Fundamentals  (Boca Raton, FL: CRC Press) 
