Although photoplethysmography (PPG) has been a standard monitoring modality for vital parameters for many years, the PPG waveform itself is largely ignored in daily clinical routine. This attitude towards the PPG waveform has changed in recent years, and is still changing. An ever-increasing body of research examines to what extent physiological information beyond blood oxygen saturation can be drawn from the PPG. One important approach to eliciting that information is the analysis of the PPG waveform.
One prominent example of the value of PPG waveform analysis is the assessment of hemodynamic compensation in the peri-operative setting or in trauma situations. It has been shown that the width of the digital volume pulse, measured by a photoplethysmographic device on the index finger, is sensitive to changes in the systemic vascular resistance and thus allows for early detection of active compensatory mechanisms during hemorrhage. Recently, Convertino et al. demonstrated how modern machine learning techniques can be utilized to reveal subtle waveform features that track the compensatory phase of hemorrhage.
The digital pulse waveform changes dynamically with, among other factors, alterations in vascular tone or changes in pulse wave velocity, induced for example by blood pressure variations or medication. The aim of this work is to develop a classification algorithm that automatically groups individual digital volume pulses, measured with photoplethysmographic devices, based on their specific waveform.
This paper is organized as follows. In Section 2, the class definitions for grouping the pulses are given and the developed approach for automated pulse classification is outlined. The results are presented in Section 3. Finally, concluding remarks are given in Section 4.
2.1 Class definitions for digital volume pulses
In their pioneering paper on pulse waveform analysis, Dawber et al. defined four classes for the categorization of arterial pulse waves:
Class 1: A dicrotic notch (i.e. an incisura) is apparent on the downward slope of the pulse wave.
Class 2: No distinct dicrotic notch is visible, but the downward slope becomes horizontal.
Class 3: No distinct dicrotic notch is visible, but a well-defined change in the angle of descent is observed.
Class 4: No evidence of a dicrotic notch is visible.
An example waveform for each class is presented in Figure 1. Under the simplest assumptions, digital volume pulses can be thought of as a superposition of two Gaussian-shaped waves, where the leading (systolic) wave is the direct wave that travels from the heart to the finger after ventricular contraction and the lagging (diastolic) wave results from reflections, mainly at the iliac arteries in the lower body.
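Under these assumptions, a single pulse can be sketched as the sum of two Gaussians. The following is a minimal illustration; the amplitudes, centers and widths are purely illustrative values, not parameters fitted to measured pulses:

```python
import numpy as np

def synthetic_pulse(t, a_sys=1.0, mu_sys=0.25, sigma_sys=0.08,
                    a_dia=0.45, mu_dia=0.55, sigma_dia=0.12):
    """Model a digital volume pulse as the superposition of a systolic
    (direct) and a diastolic (reflected) Gaussian wave.  All parameter
    values are illustrative assumptions."""
    systolic = a_sys * np.exp(-0.5 * ((t - mu_sys) / sigma_sys) ** 2)
    diastolic = a_dia * np.exp(-0.5 * ((t - mu_dia) / sigma_dia) ** 2)
    return systolic + diastolic

t = np.linspace(0.0, 1.0, 500)   # one pulse period, 500 samples
pulse = synthetic_pulse(t)       # leading systolic peak near t = 0.25
```

Varying the relative amplitude and delay of the diastolic wave reproduces the transition from a pronounced dicrotic notch (class 1) to a featureless decay (class 4).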
Although Dawber et al. based their definitions on the assessment of arterial pressure pulses measured with a pressure cuff on the finger, these classes are equally applicable to digital volume pulse signals recorded by a photoplethysmographic device, as Millasseau et al. have shown that a simple transfer function can be found to transform the volume pulses into pressure pulses.
2.2 Automated classification of pulse waves
In this work, we present an algorithm based on modern machine learning techniques that automatically finds individual pulses in photoplethysmographic signals and sorts them into one of the pulse classes defined by Dawber et al. The main steps of our approach are outlined in Figure 2.
First, in the pre-processing stage, the raw signals are filtered with an 8th-order, zero-phase Butterworth band-pass filter with cut-off frequencies of 0.2 Hz and 16 Hz in order to remove noise and baseline variations. The high-pass cut-off frequency was chosen because higher cut-off frequencies introduce non-negligible pulse shape distortions. The low-pass cut-off frequency was found empirically and constitutes a good trade-off between shape distortions and high-frequency noise rejection.
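A filter with these characteristics can be realized, for example, with SciPy. The sketch below is one plausible reading of the specification, since the paper gives no implementation details: a 2nd-order design, which SciPy expands to four poles for a band-pass and which `filtfilt` applies forward and backward, yields an 8th-order zero-phase magnitude response:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0  # sampling rate of the sensor node in Hz

# 2nd-order design -> 4 poles for the band-pass; the forward-backward
# pass of filtfilt doubles the attenuation and cancels the phase shift
b, a = butter(2, [0.2 / (fs / 2), 16.0 / (fs / 2)], btype="bandpass")

# toy signal: pulse component plus baseline wander and mains noise
t = np.arange(0, 10, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm pulse component
       + 0.5 * np.sin(2 * np.pi * 0.05 * t)  # baseline wander
       + 0.2 * np.sin(2 * np.pi * 50 * t))   # mains interference
clean = filtfilt(b, a, raw)                  # zero-phase filtering
```

The zero-phase property matters here because a causal filter would distort exactly the timing features (notch position, angle of descent) on which the class definitions rely.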
In the pulse detection stage, individual pulses are located by means of the open-source peak-detection algorithm presented by Zong et al.
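For illustration only, the following sketch segments a filtered PPG signal with SciPy's generic peak finder; this is a simplified stand-in, not the slope-sum algorithm of Zong et al. referenced above, and the refractory and prominence settings are assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def detect_pulses(ppg, fs):
    """Locate individual pulse peaks in a filtered PPG signal.
    Simplified stand-in for the referenced detector: enforce a
    refractory distance of 0.33 s (max. ~180 bpm) and a minimum
    prominence relative to the signal's amplitude range."""
    peaks_idx, _ = find_peaks(ppg, distance=int(0.33 * fs),
                              prominence=0.3 * np.ptp(ppg))
    return peaks_idx

fs = 500
t = np.arange(0, 5, 1 / fs)
ppg = 1.0 - np.cos(2 * np.pi * 1.2 * t)   # toy signal, 1.2 pulses/s
peaks_idx = detect_pulses(ppg, fs)        # 6 peaks within 5 s
```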
In the next stage, a set of features that facilitates separating the pulse waveform classes is calculated for every single pulse. Initially, we designed 24 features with the goal of mapping the information on class affiliation, which is easily visible to the human eye, onto numerical values that the classification algorithms can utilize for model training and subsequent classification. Using the INTERACT feature selection algorithm, we identified a subset of 13 features that showed a positive interaction with respect to the classification problem. A list of the employed features, including a short description and corresponding abbreviations, is given in Table 1.
The features are calculated either from the time-domain signal (see Figure 3) or from its first derivative (see Figure 4). For better generalization, all features are calculated on pulses with normalized peak-to-peak amplitude.
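A minimal sketch of the normalization together with two hypothetical features, one from the time-domain signal and one from the first derivative; these two are illustrative examples and not necessarily among the 13 features listed in Table 1:

```python
import numpy as np

def normalize(pulse):
    """Scale a single pulse to unit peak-to-peak amplitude, as done
    before feature extraction."""
    pulse = pulse - pulse.min()
    return pulse / pulse.max()

def example_features(pulse, fs):
    """Two illustrative features: the pulse width at half maximum
    (time domain) and the steepest upstroke slope (first derivative)."""
    p = normalize(pulse)
    above = np.flatnonzero(p >= 0.5)
    width_sec = (above[-1] - above[0]) / fs   # time-domain feature
    max_slope = np.max(np.diff(p)) * fs       # first-derivative feature
    return width_sec, max_slope
```

Because the pulses are normalized first, both features depend only on the pulse shape, not on the absolute PPG amplitude, which varies with sensor contact pressure and perfusion.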
In the classifier training stage, modern machine learning algorithms are trained to generate a model that automatically groups unknown pulses into one of the four classes. In this work, we utilized and evaluated an artificial neural network (ANN), a support vector machine (SVM) and a bagged tree classifier (BTC). Since the SVM algorithm is only applicable to binary separation problems, we decomposed the four-class pulse classification into a sequence of two binary classification steps. In order to avoid overfitting during classifier training and to obtain an unbiased estimate of classification performance, we divided all available data into training and test sets: the algorithms learned only from the training sets, and performance metrics were calculated solely on the test sets.
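The train/test protocol can be sketched with scikit-learn stand-ins. The feature vectors below are synthetic clusters, and the classifier hyperparameters are assumptions, not those used in the paper:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
# synthetic stand-in for 13-dimensional pulse feature vectors:
# four well-separated clusters, one per waveform class
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(100, 13))
               for c in range(4)])
y = np.repeat(np.arange(1, 5), 100)        # class labels 1..4

# held-out test set, so the performance estimate is unbiased
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

btc = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                        random_state=0).fit(X_tr, y_tr)
accuracy = btc.score(X_te, y_te)           # evaluated on test data only
```

Note that scikit-learn's `SVC` would handle the four classes internally via one-vs-one voting, whereas the paper instead chains two explicit binary SVM decisions.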
In this section we present the results of the performance evaluation of the proposed approach. We carried out simulations on two major datasets. A summary of the class distribution within the two datasets can be found in Table 2.
The first dataset comprises a measurement study with ten healthy subjects (eight males and two females, aged between 21 and 28 years) that was conducted at our laboratory. During each measurement, a custom-made, wireless PPG sensor node, which is part of our body sensor network rBSN, was attached to the right index finger. The sample rate of the sensor was 500 Hz. All measurements were conducted in a standing position. At the beginning of the measurement, the subject's right arm pointed towards the floor. After 1 min had elapsed, the right arm was lifted by 45°. This was repeated until the subject's arm pointed towards the ceiling. This measurement protocol was chosen because incrementally lifting the arm reliably generates digital volume pulses that gradually shift from class 1 to class 4.
The second dataset was extracted from the MIMIC II waveform database, which is available online from the PhysioNet archive. Because accurately timed annotations of cardiovascular events that dynamically change the digital pulse waveform are missing, we used the patients' ages to identify recordings with different pulse classes. For obtaining class 1 pulses, we chose recordings from patients aged 35 years or younger. Class 2 and class 3 pulses were extracted from recordings of patients aged between 40 and 65 years. Class 4 pulses were obtained from subjects older than 80 years. The PPG recordings in the MIMIC II database were sampled at 125 Hz. Reference labelling was obtained by consensus of multiple experts after independent visual inspection. We estimated the performance in the form of the F1-score, as it is a measure of a classifier's accuracy that considers both precision and recall: F1 = 1 means perfect classification and F1 = 0 means that all classifier decisions are wrong.
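For reference, the F1-score is the harmonic mean of precision and recall, computed from true positives (tp), false positives (fp) and false negatives (fn); a minimal sketch:

```python
def f1_score(tp, fp, fn):
    """F1 as the harmonic mean of precision and recall.
    True negatives do not enter the score."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 90 correct detections, 10 false alarms, 30 missed pulses:
# precision = 0.90, recall = 0.75, F1 ~ 0.818
score = f1_score(tp=90, fp=10, fn=30)
```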
The classification results on the test datasets are given in Table 3. In the arm-lift dataset, all classifiers produce comparable results. In the PhysioNet dataset, the ANN and the BTC achieve slightly higher F1-scores for class 2 than the SVM; conversely, the SVM achieves a better F1-score for class 3. Overall, the detection accuracy for class 1 pulses is much better than for the other classes. This effect can be explained by the class distribution shown in Table 2: as there are far more class 1 pulses in the training datasets, their shape is learned better. Furthermore, the separation of class 2 from class 3 and of class 3 from class 4 pulses during the annotation process was not trivial, so labelling errors might also degrade the classification performance. To evaluate whether a combination of the classifiers could improve classification, we fused the classifiers by calculating the mean of their individual class votes and rounding the result. The result of this fusion can be found in the last row of Table 3. For both datasets, the fusion did not lead to a significant improvement, suggesting that the classifiers agree strongly even when they misclassify.
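The fusion rule amounts to a few lines. One detail left open in the text is how ties at .5 are broken; the sketch below inherits Python's banker's rounding, which is an assumption:

```python
def fuse(votes):
    """Combine per-classifier class votes (values 1-4) by averaging
    and rounding to the nearest class.  Tie-breaking at .5 follows
    Python's round() (round half to even) -- an assumption."""
    return round(sum(votes) / len(votes))

decision = fuse([2, 2, 3])   # two votes for class 2, one for class 3
```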
Due to its unobtrusiveness, low cost and easy availability, automated interpretation of the digital volume pulse waveform measured by photoplethysmography will play a significant role in the development of novel hemodynamic monitoring devices. In this work, we have demonstrated a novel approach for pulse waveform classification based on modern machine learning techniques that automatically groups digital volume pulses into the waveform classes defined by Dawber et al. This information could, for example, prove valuable in the assessment of the cardiovascular system during surgical operations or during monitoring in intensive care units. In future work, we aim to improve the classification performance by exploring further features.
Research funding: The authors state no funding involved. Conflict of interest: The authors state no conflict of interest. Informed consent: Informed consent has been obtained from all individuals included in this study. Ethical approval: The research related to human use complies with all relevant national regulations and institutional policies, was performed in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors' institutional review board or equivalent committee.
Awad AA, Haddadin AS, Tantawy H, Badr TM, Stout RG, Silverman DG, et al. The relationship between the photoplethysmographic waveform and systemic vascular resistance. J Clin Monit Comput. 2007;21:365–72.
Convertino VA, Grudic G, Mulligan J, Moulton S. Estimation of individual-specific progression to impending cardiovascular instability using arterial waveforms. J Appl Physiol. 2013;115:1196–202.
Dawber T, Thomas H, McNamara P. Characteristics of the dicrotic notch of the arterial pulse wave in coronary heart disease. Angiology. 1973;24:244–55.
Millasseau SC, Ritter JM, Takazawa K, Chowienczyk PJ. Contour analysis of the photoplethysmographic pulse measured at the finger. J Hypertens. 2006;24:1449–56.
Zong W, Heldt T, Moody G, Mark R. An open-source algorithm to detect onset of arterial blood pressure pulses. In: Computers in Cardiology; 2003. p. 259–62.
Zhao Z, Liu H. Searching for interacting features. In: International Joint Conference on Artificial Intelligence; 2007.
Pflugradt M, Mann S, Tigges T, Görnig M, Orglmeister R. Multi-modal signal acquisition using a synchronized wireless body sensor network in geriatric patients. Biomed Eng. 2015;61:57–86.
Saeed M, Villarroel M, Reisner AT, Clifford G, Lehman LW, Moody G, et al. Multiparameter intelligent monitoring in intensive care II (MIMIC-II): a public-access intensive care unit database. Crit Care Med. 2011;39:952–60.
Published Online: 2016-09-30
Published in Print: 2016-09-01
Citation Information: Current Directions in Biomedical Engineering, Volume 2, Issue 1, Pages 203–207, ISSN (Online) 2364-5504, DOI: https://doi.org/10.1515/cdbme-2016-0046.
©2016 Timo Tigges et al., licensee De Gruyter. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0).