
For a clinical application of optical triangulation to assess respiratory rate using an RGB camera and a line laser

Abstract

This paper presents a non-contact and unrestrained respiration monitoring system based on the optical triangulation technique. The proposed system consists of a red-green-blue (RGB) camera and a line laser installed to face the frontal thorax of a human body. The key idea of the work is that, unlike in other research, the camera and the line laser are mounted in opposite directions. By applying the proposed image processing algorithm to the camera image, laser coordinates are extracted and converted to world coordinates using the optical triangulation method. These converted world coordinates represent the height of the thorax of a person, and the respiratory rate is measured by analyzing changes in the thorax surface depth. To verify system performance, the camera and the line laser were installed on the head and foot sides of a bed, respectively, facing toward the center of the bed. Twenty healthy volunteers were enrolled and underwent measurement for 100 s. Evaluation results show that the optical triangulation-based image processing method demonstrates non-inferior performance to a commercial patient monitoring system, with a root-mean-squared error of 0.30 rpm and a maximum error of 1 rpm (\(p > 0.05\)), which implies that the proposed non-contact system can be a useful alternative to the conventional healthcare method.


Introduction

With the increase in average life expectancy, there is growing interest in maintaining a healthy life. In particular, various studies have been conducted to monitor health-related biosignals, such as heart rate and respiratory rate, using wearable devices and/or other remote medical diagnosis equipment [1,2,3].

Respiratory rate during sleep is a major vital sign, related to the diagnosis and treatment of sleep disorders [4], and various studies have been conducted in this field [5,6,7,8,9,10,11,12,13]. Hussain et al. [6] attached radio-frequency identification (RFID) tags to various positions on the subjects’ shirts and measured the respiratory rate of a lying person using RFID signals received wirelessly. Moreover, Zhang et al. [7] developed a band-type wearable device that wraps around the abdomen. This device can monitor the respiratory rate by measuring changes in the circumference of the abdomen. Additionally, Massaroni et al. [8] used a red-green-blue (RGB) camera and a head-mounted device to measure respiratory rate.

Meanwhile, other studies have focused on non-contact, unconstrained measurement techniques that can replace accurate but inconvenient contact sensors, since contact sensors are impractical for monitoring respiration continuously in daily life [9].

Unlike the conventional methods that image the human thorax, some research measures respiration by capturing the face with a thermal camera. Rzucidlo et al. [14] successfully measured the respiration of wild animals using the Eulerian Video Magnification technique. Additionally, Kwasniewska et al. [15] measured respiratory rate by analyzing faces captured with a thermal camera using deep learning.

In another approach, Fang et al. [16] measured breathing by capturing the sound generated during breathing with the microphone of a smartphone. This approach is very practical in that the respiratory rate can be measured during sleep by simply placing a smartphone beside the bed.

The optical triangulation technique is widely known to have high accuracy despite the use of inexpensive sensors for depth measurement [17]. A general optical triangulation-based depth measurement technique uses a single sensor composed of a laser light source, an optical sensor, and a lens. The basic principle used inside this sensor is well described in [18]. When the laser beam in this sensor strikes an arbitrary point on the surface of an object, it is reflected from the surface and is then incident on the optical sensor. In accordance with basic geometric principles, the depth can be calculated. The basic formula is well described in [19].

This technique can be applied to the non-contact measurement of a person’s respiratory rate using depth information; in a previous study, Aoki et al. [4] measured human breathing by irradiating a near-infrared pattern laser over the entire chest area. However, the optical triangulation technique has the disadvantage that the measurement area is limited, because depth can be measured only where the laser is irradiated. Therefore, Aoki et al. [4] used a pattern laser consisting of multiple lines to measure abdominal depth over the entire upper body, and similarly, Jezeršek et al. [20] used a pattern laser consisting of 33 parallel lines. However, a pattern laser costs far more than a single line laser without a pattern. This study therefore focuses on the fact that, if the camera and the laser are installed at a relatively large angle, irradiating the person from opposite sides, a sufficient measurement area for respiration monitoring can be obtained even with a single line laser rather than a pattern laser, as shown in Fig. 1. We have studied the potential of this technique in previous work [21].

With this configuration, the narrow measurement area that results from using a single line laser is no longer a limitation, and the respiratory rate can be assessed robustly for various body types.

Meanwhile, Paz-Reyes et al. [22] conducted a study of respiration measurement using only an RGB camera without a line laser. That approach may be more cost-effective than the proposed technique because it does not require a line laser; however, since it extracts respiratory signals from feature points on patterns printed on clothing, its performance was significantly affected by the printed pattern of the clothes the subjects wore [22].

In this paper, an optical triangulation-based algorithm that uses an RGB camera and a line laser is proposed to conveniently measure the respiratory rate of a human body, especially targeting the bed-sleeping condition. To validate the efficiency of the algorithm, experimental results from 20 subjects are presented and compared with data from a commercial patient monitoring system.

Fig. 1 Expansion of the measurable plane by changing the method of applying optical triangulation

Optical triangulation technique

The technique of extracting depth information using optical triangulation is shown schematically in Fig. 2. The setup consists of a line laser and a camera irradiating and observing the subject at angles \(\theta\) and \(\theta ^{'}\), respectively. The image coordinate system acquired from the corresponding optical system is converted to the world coordinate system in real space. In other words, the depth (height of the object) \(d_{world}\) can be acquired from the line laser pixels in the image coordinate system (\(x_{image}\), \(y_{image}\)).

In particular, as shown in Fig. 2, when the line laser is tilted by \(\theta\) and the camera is tilted by \(\theta ^{'}\), the \(d_{world}\) of the subject is projected into \(d^{'}_{world}\) using Eq. 1.

$$\begin{aligned} d^{'}_{world}= d_{world}\times \tan ^{-1}(\theta ) \end{aligned}$$
(1)
Fig. 2 Conceptual scheme of optical triangulation

Then \(d^{'}_{world}\) is projected as \(d^{'}\) onto a virtual projection plane that is orthogonal to the camera. The virtual projection plane is introduced only for the purpose of description; \(d^{'}\) is obtained from \(d^{'}_{world}\) by Eq. 2.

$$\begin{aligned} d^{'}=d^{'}_{world}\times \cos ^{-1}(\pi /2-\theta ^{'}) \end{aligned}$$
(2)

In Fig. 2, since \(\theta ^{'}\) is \(\pi /2\), the cosine term is eliminated so that \(d^{'}\) and \(d^{'}_{world}\) are equal.

Finally, \(d_{image}\) is calculated by multiplying \(d^{'}\) by \(\alpha\), a scaling factor that converts from \(d^{'}\) to \(d_{image}\), as in Eq. 3.

$$\begin{aligned} d_{image}=\alpha \times d^{'} \end{aligned}$$
(3)

Therefore, if \(d_{image}\) is calculated by the image processing algorithm, which segments the line laser pixels in the image, the depth information \(d_{world}\) can be acquired for images captured in real time.
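To make the conversion concrete, the following Python sketch inverts Eqs. 1–3 to recover the depth from the measured pixel displacement of the laser line. The reciprocal reading of the inverse-trigonometric superscripts and the example angle values are assumptions made for illustration only; the actual system was implemented in C++ with OpenCV.

```python
import numpy as np

def depth_from_pixel(d_image, alpha=1.0, theta=np.deg2rad(45), theta_prime=np.pi / 2):
    """Illustrative inversion of Eqs. 1-3: laser-line pixel displacement -> depth.

    With theta' = pi/2 the cosine term is unity, and with alpha = 1 (as in the
    experiments) the depth stays in pixel units; only its relative change over
    time matters for respiration monitoring.
    """
    d_proj = d_image / alpha                                 # Eq. 3 inverted: d' = d_image / alpha
    d_world_proj = d_proj * np.cos(np.pi / 2 - theta_prime)  # Eq. 2 inverted (unity when theta' = pi/2)
    return d_world_proj * np.tan(theta)                      # Eq. 1 inverted
```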

Proposed algorithm description

The image processing algorithm used in this study is summarized in Fig. 3. The first stage, from “R-channel split” to “Threshold”, segments the pixels corresponding to the line laser area in the acquired image. Next, to extract the depth signal, a reference point selection algorithm chooses an appropriate point, among the segmented line laser pixels, for applying the optical triangulation technique. After that, the depth signal is calculated by the equations in this paper, which are explained in detail later.

For the depth signal processing, depth samples are collected until the signal length exceeds the predefined frame length \(L_t\). Then, a difference operation with a previous sample and a moving average filter are applied, in real time, to the depth level sequence calculated by the optical triangulation technique at the reference point. Finally, a zero crossing point detection algorithm is applied to the output of the moving average filter to count the number of respirations. Each step is described in the following sections.

Fig. 3 Flowchart of the proposed algorithm

Image acquisition and R-channel extraction

The first step of the image processing algorithm following image acquisition is to extract the red (R) channel from the three RGB channels, since the color of the line laser is red. Typically, the optical triangulation technique uses a red line laser because longer wavelengths diffract less, so the beam stays straighter. To detect the red line laser region in the image, channel separation is performed. The images split by channel from the input RGB image are shown below.

In Fig. 4, the split images (Fig. 4b-d) appear in grayscale. Figure 4b is the R-channel of the input image (Fig. 4a), Fig. 4c is the G-channel, and Fig. 4d is the B-channel. The R-channel image (Fig. 4b) is the most advantageous for line laser segmentation because the contrast between the line laser pixels and the surrounding pixels is the clearest.
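As a minimal illustration of this acquisition step, the Python/OpenCV sketch below grabs one frame and keeps only the R-channel; the device index is an assumption for illustration, and the resolution matches the acquisition setting reported in the experiments (the original system was implemented in C++ with OpenCV).

```python
import cv2

# Open the webcam and request the acquisition resolution used in the experiments.
cap = cv2.VideoCapture(0)                       # device index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 2560)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 1440)

ok, frame = cap.read()                          # OpenCV stores frames in B, G, R order
if ok:
    r_channel = frame[:, :, 2]                  # red channel: clearest laser/background contrast
cap.release()
```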

Fig. 4 Split images of an input RGB image

Median filter

A median filter reduces image noise whilst preserving object edges [23]. In the proposed system, the median filter reduces reflected spot-like noise while preserving the edge component of the laser line.

Figure 5 shows the median filtering results. Figure 5a is a part of the R-channel image, and the results of applying the median filter with a window of \(3\times 3\) to the image are shown in Fig. 5b. In addition, the main areas of each image are zoomed-in and shown in Fig. 5c and d, respectively.

When comparing Fig. 5c and d, the dark spot-like noise in the laser area, marked as (1), is removed and the connectivity of the laser area is strengthened, while the spot-like noise (2) caused by reflected light is suppressed.
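A one-line sketch of this step, continuing from the R-channel image above, might look as follows (ksize = 3 corresponds to the \(3\times 3\) window used in the experiments):

```python
import cv2

def denoise_r_channel(r_channel):
    """Suppress spot-like reflection noise while preserving the laser-line edges."""
    return cv2.medianBlur(r_channel, 3)   # ksize=3 => 3x3 median window
```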

Fig. 5 Results of median filter

Thresholding

Thresholding is the last step in segmenting the line laser pixels, and it is very important to select the threshold value properly.

Figure 6 shows the application of various thresholding methods to the median-filter output. In particular, the red circled area in Fig. 6a appears bright even though it is not part of the laser area. Therefore, if the threshold value is selected incorrectly, as shown in Fig. 6b, the pixels in the red circled area of Fig. 6a are over-detected. Figure 6c shows the Otsu [24] thresholding technique, which is widely applied to images whose histogram follows two normal distributions; however, as shown in Fig. 6c, areas other than the laser are segmented as well. In this way, the appropriate threshold value changes depending on environmental conditions, such as the reflected light of the line laser. Therefore, a threshold selection algorithm that obtains results such as Fig. 6d, robust to environmental changes, is proposed based on the following two facts. First, the line laser area contains the brightest values in the image. Second, the maximum number of line laser pixels is fixed and can be calculated from the thickness of the line laser and the width of the image. For example, if the thickness of the line laser is 10 pixels and the width of the image is 10 pixels, the number of segmented line laser pixels cannot exceed \(100 (=10\times 10)\).

Fig. 6 Application of various threshold algorithms

Using the above two facts, the pseudocode of the proposed thresholding is presented in Appendix A. First, thresholding is performed with the brightest value as the threshold (T), and the number of segmented pixels (C) is counted. The threshold value is then lowered by one step. This process is repeated as long as the number of segmented pixels does not exceed a certain level (TC), and the last threshold value (T) satisfying this condition is selected as the final threshold value.

For all subsequent images acquired in real time, this threshold value is used uniformly for thresholding. Once the proper threshold value is found, the search does not need to be repeated, so the computational load of the respiration measurement is light enough for real-time processing.
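A possible Python rendering of this search, written from the description above (the original pseudocode is given in Appendix A), is shown below; the variable names are illustrative.

```python
import numpy as np

def select_threshold(img, max_laser_pixels):
    """Lower the threshold from the brightest grey level until the segmented
    pixel count would exceed the bound derived from laser thickness x image width."""
    best = int(img.max())                              # initial threshold T = brightest value
    for t in range(best, -1, -1):                      # lower T one step at a time
        if np.count_nonzero(img >= t) > max_laser_pixels:
            break                                      # this level would over-segment (C > TC)
        best = t                                       # last admissible threshold
    return best

# Example bound: a 10-pixel-thick laser line can occupy at most 10 * image-width pixels.
# T = select_threshold(filtered, max_laser_pixels=10 * filtered.shape[1])
# mask = (filtered >= T)   # reused unchanged for all subsequent frames
```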

Optical triangulation technique

The purpose of this section is to obtain the depth signal, denoted by \(\{ d^{*}_{world} \}_{t}\), using the optical triangulation technique. First, the depth \(d_{world}\) and the corresponding \(x_{world}\) coordinate are extracted from the pixels segmented as the laser area, following Eq. 4.

$$\begin{aligned} & d_{world} = \alpha \times y_{segmentation} \nonumber \\ & x_{world} = \beta \times x_{segmentation} \end{aligned}$$
(4)

where \(\alpha\) and \(\beta\) are scaling factors, and (\(x_{segmentation}\),\(y_{segmentation}\)) are the segmented coordinates in an image.

For the input image shown in Fig. 7a, the process of acquiring a depth signal is shown in Fig. 7b. The one-dimensional depth \(d_{world}\) can be represented with respect to \(x_{world}\) as shown in Fig. 7(1). The process of accumulating \(d_{world}\) in chronological order for images input in real time is shown in Fig. 7(2).

Fig. 7 Depth signal extraction process

As these results are acquired, the reference point is determined as the highest depth \(d_{world}^{*}\) by Eq. 5 to extract a one-dimensional depth signal, as in Fig. 7(3).

$$\begin{aligned} d_{world}^{*}=\max \limits _{x_{world}} {d_{world}} \end{aligned}$$
(5)

Then for the current time \(t = i\), the amplitude of the depth signal \(\{d_{world}^{*}\}_{t=i}\) is extracted, as shown in Fig. 7(3).
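The per-frame computation of Eqs. 4–5 can be sketched as follows; the binary mask is assumed to be the laser segmentation produced by the thresholding step, and \(\alpha = \beta = 1\) keeps the values in pixel units as in the experiments.

```python
import numpy as np

def reference_depth(mask, alpha=1.0, beta=1.0):
    """One reference depth value per frame (Eqs. 4-5)."""
    y_seg, x_seg = np.nonzero(mask)        # segmented (y, x) image coordinates
    if y_seg.size == 0:
        return None                        # no laser pixels detected in this frame
    d_world = alpha * y_seg                # Eq. 4: depth coordinate
    x_world = beta * x_seg                 # Eq. 4: lateral coordinate (shown for completeness)
    return float(d_world.max())            # Eq. 5: reference point = highest depth
```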

Depth signal processing

An example of the depth signal processing for one participant is depicted in Fig. 8.

\(D_{i}\) is a periodic signal oscillating around zero, calculated from the depth signal \(\{d_{world}^{*} \}_{t}\) by applying the difference operation (Eq. 6) with the sample n frames earlier.

$$\begin{aligned} D_{i}=\{d_{world}^{*}\}_{t=i} - \{d_{world}^{*}\}_{t=i-n} \end{aligned}$$
(6)

Then the acquired \(D_{i}\) is smoothed, as shown in Fig. 8\((n = 20)\).

To flatten the baseline and further smooth the signal, a moving average filter [25] is applied using Eq. 7.

$$\begin{aligned} MA_{i} = 1/W \times \sum \limits _{w=0}^{W-1} D_{i-w} \end{aligned}$$
(7)

The result of the moving average filter \(MA_{i}\) is shown in Fig. 8\((W = 3)\).

To measure the respiratory rate, zero crossing points at which the derivative is negative are detected.

If \(MA_{i}\) is smaller than C, a constant close to zero, and the previous sample \(MA_{i-1}\) is equal to or greater than C, then the current sample i is considered a zero crossing point.

An example of the zero crossing point detection algorithm \(Z_{i}\) is shown in Fig. 8\((C = 0.5)\).
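The whole depth-signal processing chain can be sketched in a few lines of Python; the parameter defaults follow the values used in the experiments (n = 20, W = 3, C = 0.5), while the function name is illustrative.

```python
import numpy as np

def detect_breath_crossings(depth, n=20, W=3, C=0.5):
    """Difference (Eq. 6), moving average (Eq. 7), then falling zero-crossing detection.

    depth : 1-D array of reference depths {d*_world}_t, one sample per frame.
    Returns the indices in the smoothed signal where MA drops below C after
    having been >= C; each such crossing is counted as one respiration.
    """
    depth = np.asarray(depth, dtype=float)
    D = depth[n:] - depth[:-n]                          # Eq. 6: difference with the sample n frames earlier
    MA = np.convolve(D, np.ones(W) / W, mode="valid")   # Eq. 7: W-point moving average
    return [i for i in range(1, MA.size)
            if MA[i] < C and MA[i - 1] >= C]
```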

Fig. 8 Results of each step in the depth signal processing algorithm

Experiments

Subjects

Twenty subjects, eight men and twelve women, participated in this experiment. All of them were relatively young, healthy researchers from the Daegu-Gyeongbuk Medical Innovation Foundation, aged between 27 and 45 years, and they volunteered through an internal web advertisement following approval from the institutional review board (IRB), the ethical committee of the institution, which is controlled by a governmental authority (control number: DGMIF-20200605-HR-001-01). By IRB policy, volunteers from the same department as the authors were excluded from recruitment, while no special exclusion criteria were applied owing to the nature of the experiment. The heights of the subjects ranged from 150 to 185 cm.

Experimental setting and protocol

Figure 9 shows the overall system for the experiment. A webcam (Logitech 960-001105 Kit, Logitech International S.A., Lausanne, Switzerland) and a line laser (MLG633, Latech, Gimhae, South Korea) were attached to the head side and foot side of the bed, respectively, pointing toward the center of the bed. The webcam has a maximum resolution of \(3480\times 2160\) pixels at 30 frames per second (fps), but the acquisition resolution was set to \(2560\times 1440\) pixels at 15 fps to guarantee real-time processing. The line laser emits at a wavelength of 655 nm, and the width of the laser line is adjustable through a focus adjustment function. The optical power was 2.9 mW and the operating voltage was 7-24 VDC.

Fig. 9 A schematic illustration of the system

The webcam was installed at a height of 61 cm, perpendicular to the bed, on the head side. The line laser was installed at a height of 58 cm, and the angle between the webcam and the line laser was 102 degrees, as explained later in the discussion section. These two devices were connected via USB 3.0 to a desktop computer (Microsoft Windows 10 64-bit, Intel Core i5 CPU, 4 GB RAM) running the proposed algorithm, which was implemented in C++ with OpenCV [26], an open-source computer vision library.

A commercial patient monitoring system (BPM-770, Bionics, South Korea) was also prepared to obtain the ground-truth data. The commercial system is approved as a medical device by the Ministry of Food and Drug Safety (MFDS) of South Korea and has a measurement range of \(2-150\) respirations per minute (rpm) with an accuracy of \(\pm\) 2 rpm.

The algorithm parameters were set as follows. The window size of the median filter was \(3\times 3\) (Section Median filter). The threshold value was selected automatically by the proposed thresholding algorithm (Section Thresholding). Both \(\alpha\) and \(\beta\) in Eq. 4 were set to one experimentally. In the general 3D triangulation method, \(\alpha\) and \(\beta\) convert image pixel units to physical units (centimeters) during the coordinate transformation; by setting these parameters to one, the signal processing could be performed in pixel units. Note that only the relative change in depth is observed, not its absolute magnitude. After acquiring the depth signal for the first 50 frames (\(L_{t}\)) in Fig. 3, the depth signal processing algorithm was initiated, and the difference operation was conducted between the current samples and the samples 20 frames earlier (\(n = 20\), as explained in Section Depth signal processing). The moving average window size W was 3, and the constant value for zero crossing point detection C was 0.5. These settings are consolidated below.
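The dictionary keys below are illustrative names, not identifiers from the original C++ implementation; the values are those reported in the text.

```python
PARAMS = {
    "median_window": (3, 3),   # median filter window size
    "alpha": 1.0,              # Eq. 4 scaling factor (pixel units)
    "beta": 1.0,               # Eq. 4 scaling factor (pixel units)
    "L_t": 50,                 # frames collected before signal processing starts
    "n": 20,                   # lag of the difference operation (Eq. 6)
    "W": 3,                    # moving-average window length (Eq. 7)
    "C": 0.5,                  # constant for zero-crossing point detection
}
```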

In the experiment, the subjects signed the consent form and were instructed to attach the electrodes by themselves after detailed instructions and a demonstration from a skilled examiner. They decided whether or not to use a blanket and lay on the bed as they would in daily life (Fig. 10). The first row in Fig. 10 shows the various body shapes of the subjects and the appearance of the laser line in the experiment. The first three rows in the figure show cases where subjects decided to use blankets, while the last row shows cases without blankets. Whenever a subject was ready, the measurement started with a verbal signal from the examiner and continued for 160 s in total.

Fig. 10 Sample images of experiments: various body types, blankets, and clothes

Data from the proposed system were analyzed using a Python script and compared with data from the commercial patient monitoring system in terms of respiratory rate. Respiratory rate is usually given in respirations per minute (rpm), and one rate sample requires a 60 s measurement. Therefore, only 100 s of windowed data can be analyzed, even though the bio-signal is measured for the entire 160 s that a subject lies on the bed. The odd-numbered zero crossing points extracted from the two systems were compared every two seconds to validate the real-time reliability of the proposed algorithm.
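A sliding-window computation of this kind can be sketched as below; it is an illustrative reconstruction of the windowing described above, not the exact analysis script.

```python
import numpy as np

def rpm_series(crossing_times_s, total_s=160, window_s=60, step_s=2):
    """Count breaths in the preceding 60 s window, every 2 s.

    crossing_times_s : zero-crossing time stamps (in seconds) for one recording.
    A count per 60 s window is directly a rate in respirations per minute (rpm).
    """
    t = np.asarray(crossing_times_s, dtype=float)
    series = []
    for end in np.arange(window_s, total_s + step_s, step_s):
        in_window = (t > end - window_s) & (t <= end)
        series.append((float(end), int(in_window.sum())))
    return series
```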

Results

Data from all 20 subjects were successfully measured and analyzed. The measurement errors within two-second intervals were no more than 1 rpm for all subjects and all intervals, and the RMSE over the entire measurement range was also less than 1 rpm for every subject.

Table 1 Output comparison between BPM-770 and proposed system for the first subject

Table 1 shows exemplary data from the first subject. It reports all measurements over the 160 s, including the first 60 s, which were not used in the calculation of the respiratory rate. The root-mean-squared error (RMSE) of the rpm over all 60 s windows was 0.24 rpm, and the maximum error in any two-second interval was 1 rpm.

Table 2 Results of clinical trials in 20 patients

RMSE values for all 20 subjects are reported in Table 2. As described earlier, all measurement errors for two-second intervals were no more than 1 rpm. The average RMSE was 0.30 rpm, and the maximum RMSE was 0.76 rpm, for subject 8.

The average respiratory rate measured with the proposed system was 13.66 (3.86) rpm (the number in parentheses is the standard deviation) across all subjects, while the result measured by the commercial system was 13.67 (3.88) rpm. After a Shapiro-Wilk test confirmed normality, a paired t-test showed \(p = 0.77\) (\(>0.05\)).
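Assuming the paired test above is a paired t-test, the comparison can be reproduced with SciPy as follows; `proposed_rpm` and `reference_rpm` stand for the per-subject average rates of the two systems and are placeholders, not the published data.

```python
import numpy as np
from scipy import stats

def compare_systems(proposed_rpm, reference_rpm):
    """Shapiro-Wilk normality check on the paired differences, then a paired t-test."""
    diff = np.asarray(proposed_rpm) - np.asarray(reference_rpm)
    _, p_normal = stats.shapiro(diff)                          # normality of the differences
    _, p_value = stats.ttest_rel(proposed_rpm, reference_rpm)  # paired t-test
    return p_normal, p_value
```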

Discussion

In this study, an algorithm that uses an RGB camera and a line laser to conveniently measure the respiratory rate of a human body was proposed, especially targeting the bed-sleeping condition. In the experiment, although the subjects had various body shapes and wore various blankets and clothes, the proposed algorithm showed reliable respiratory rate measurement accuracy compared with the commercial equipment (BPM-770, Bionics, South Korea), with an RMSE of 0.30 rpm and a maximum error of 1 rpm (Table 2). Since most governmental authorities for medical devices require a maximum error of \(\pm\) 2 rpm in respiratory monitoring, we conclude that the system performance is acceptable.

The proposed scheme has two main advantages over existing methods. First, the combination of an RGB camera and a line laser is very affordable, especially compared with schemes using RGBD cameras; even though RGBD sensors are becoming more attractive in the market, there is still a huge cost difference between the two. Moreover, IR signals from RGBD cameras may interfere with other medical sensing signals (e.g., many clinical/surgical tracking systems utilize IR technology). Second, the devices are located at the head and foot sides of the bed, so there is no need to install a device on the ceiling.

In the experiment, we installed the webcam at a height of 61 cm and the line laser at 58 cm to form an angle of 102 degrees between them. Theoretically, depth measurement accuracy is highest when the angle between the camera and the laser is 90 degrees, if the distance from the camera to the thorax is not considered. However, setting the angle to 90 degrees would have required lifting the camera above 1 m, since we did not want the camera to be located on the ceiling; the increased distance and the limited field of view of the camera would then have degraded the data significantly. Therefore, we set the heights of the devices to reasonable levels by trial and error. This could be another variable to incorporate into the equations in future research.

A few characteristics of the proposed approach were compared with those of other research (Table 3). Non-traditional respiration monitoring technologies can be divided into contact and non-contact types. Among the non-contact approaches, our approach demonstrates acceptable accuracy (as shown in Table 2) while using relatively inexpensive sensors. Moreover, it was robust to user conditions such as different clothes and blankets.

Table 3 Comparison with traditional methods

A limitation of the proposed approach is that it is vulnerable to body motion. We found that the measurement accuracy decreases severely with body motion of the subjects. Since there is no way to distinguish body motion from the bio-signal in the data acquired using the optical triangulation technique, body motion could not be adequately compensated. However, most non-traditional technologies, especially non-contact ones, are also significantly affected by body motion. Among the latest techniques in Table 3, Hyun et al. [10] considered vigorous movements of subjects but reported the same problem as the proposed approach. A large body motion could be detected with an additional sensor or by analyzing changes in the shape of the projected laser segment. We believe the temporal change of this shape could even hint at the direction and/or magnitude of the motion, which could be an interesting research topic following this work.

We also found that a red line laser may disturb users' sleep in real-life applications; it would be better replaced with an IR camera and an IR laser in a future study. An equivalent IR system would increase the overall cost of the system, but not excessively. Body motion compensation is another future topic we need to investigate, since two minutes of respiratory rate data may be degraded or lost by abrupt body motion.

For future work, in order to apply the proposed system to telemedicine, we will set up a test bed in a hospital environment that is under construction. In this case, IEEE 11073 POCT standard protocol and Health Level 7 (HL7) protocol could be implemented to link to an electronic medical record (EMR) system.

Conclusions

In this research, we proposed a scheme that uses an RGB camera and a line laser for non-contact, unrestrained respiration measurement of a human, especially targeting the bed-sleeping condition. The study focused on applying the optical triangulation method, known as a cost-effective and accurate depth measurement technique, to unconstrained respiration measurement. The system could be built cost-effectively, and its performance was verified through an experiment with twenty subjects under various conditions (with and without blankets). Even though the system tends to become temporarily unstable under abrupt motion of the human body, the data were accurate enough in normal, stable conditions when compared with those from a commercial patient monitoring system. The evaluation results show that the proposed method demonstrates non-inferior performance compared with a commercial patient monitoring system, with a root-mean-squared error of 0.30 rpm and a maximum error of 1 rpm (\(p > 0.05\)), which implies that the proposed non-contact system can be a useful alternative to conventional healthcare methods. Considering the fast development of research on contactless bio-signal assessment, we believe such technologies will be in our homes soon.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

References

  1. Valsalan P, Baomar TAB, Baabood AHO. IoT based health monitoring system. J Crit Rev. 2020;7. https://doi.org/10.31838/jcr.07.04.137.

  2. Rahaman A, Islam MM, Islam MR, Sadi MS, Nooruddin S. Developing iot based smart health monitoring systems: a review. Rev d’Intell Artif. 2019;33. https://doi.org/10.18280/ria.330605.a.

  3. Mohammed KI, Zaidan AA, Zaidan BB, Albahri OS, Alsalem M, Hadi A, Hashim M, Albahri AS. Real-time remote-health monitoring systems: a review on patients prioritisation for multiple-chronic diseases, taxonomy analysis, concerns and solution procedure. J Med Syst. 2017;43:435–40. https://doi.org/10.1007/s10916-019-1362-x.


  4. Aoki H, Koshiji K. Non-contact respiration monitoring method for screening sleep respiratory disturbance using slit light pattern projection. IFMBE Proc. 2007;14:680–3. https://doi.org/10.1007/978-3-540-36841-0_158.


  5. Rehouma H, Noumeir R, Essouri S, Jouvet P. Advancements in methods and camera-based sensors for the quantification of respiration. Sensors. 2020;20:7252. https://doi.org/10.3390/s20247252.


  6. Hussain Z, Sagar S, Zhang WE, Sheng QZ. A cost-effective and non-invasive system for sleep and vital signs monitoring using passive RFID tags. In: Proceedings of the 16th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Houston, TX, USA, 12-14 November 2019. pp. 153-161. https://doi.org/10.1145/3360774.3360797.

  7. Zhang Z, Zhang J, Zhang H, Wang H, Hu Z, Xuan W, Dong S, Luo J. A portable triboelectric nanogenerator for real-time respiration monitoring. Nanoscale Res Lett. 2019;14:354. https://doi.org/10.1186/s11671-019-3187-4.


  8. Massaroni C, Presti DL, Formica D, Silvestri S, Schena E. Non-contact monitoring of breathing pattern and respiratory rate via rgb signal measurement. Sensors. 2019;19:2758. https://doi.org/10.3390/s19122758.


  9. Scalise L, Ercoli I, Marchionni P, Tomasini EP. Measurement of respiration rate in preterm infants by laser Doppler vibrometry. In Proceedings of the MeMeA 2011-2011 IEEE International Symposium on Medical Measurements and Applications, Proceedings, Bari, Italy, 30-31 May 2011. pp. 657-661. https://doi.org/10.1109/MeMeA.2011.5966740.

  10. Hyun BC, Park YH, Yun YU, Kim SS, Kim Y. Time-domain breathing measurement using IR-UWB radar. Proceedings of Symposium of the Korean Institute of communications and Information Sciences (KICS) Conference, Jeju, Korea, 21-23 June 2017; p. 1555–6.

  11. Kim JD, Lee WH, Lee Y, Lee HJ, Cha T, Kim SH, Song K-M, Lim Y-H, Cho SH, Cho SH, et al. Non-contact respiration monitoring using impulse radio ultrawideband radar in neonates. R Soc Open Sci. 2019;6(6):190149. https://doi.org/10.1098/rsos.190149.


  12. Wijenayake U, Park SY. Real-time external respiratory motion measuring technique using an RGB-D camera and principal component analysis. Sensors. 2017;17:1840. https://doi.org/10.3390/s17081840.


  13. Aoki H, Miyazaki M, Nakamura H, Furukawa R, Sagawa R, Kawasaki H. Non-contact respiration measurement using structured light 3-D sensor. Proceedings of SICE Annual Conference (SICE). Akita; 2012. pp. 614-618.

  14. Rzucidlo CL, Curry E, Shero MR. Non-invasive measurements of respiration and heart rate across wildlife species using Eulerian Video Magnification of infrared thermal imagery. BMC Biol. 2023;21:61. https://doi.org/10.1186/s12915-023-01555-9.


  15. Kwasniewska A, Szankin M, Ruminski J, Kaczmarek M. Evaluating accuracy of respiratory rate estimation from super resolved thermal imagery. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Berlin; 2019. pp. 2744–7. https://doi.org/10.1109/EMBC.2019.8857764.

  16. Fang Y, Jiang Z, Wang H. A Novel Sleep Respiratory Rate Detection Method for Obstructive Sleep Apnea Based on Characteristic Moment Waveform. J Healthc Eng. 2018;2018. https://doi.org/10.1155/2018/1902176.

  17. Zolfaghari H, Khalili K. On-line 3D geometric model reconstruction. In: Lecture Notes in Computer Science. Berlin/Heidelberg: Springer; 2008. https://doi.org/10.1007/978-3-540-69387-1_16. Volume 5102 LNCS.

  18. Liu CS, Hu PH, Lin YC. Design and experimental validation of novel optics-based autofocusing microscope. Appl Phys B Lasers Opt. 2012;109:259–68. https://doi.org/10.1007/s00340-012-5171-x.


  19. Liu CS, Jiang SH. A novel laser displacement sensor with improved robustness toward geometrical fluctuations of the laser beam. Meas Sci Technol. 2013;24:105101. https://doi.org/10.1088/0957-0233/24/10/105101.


  20. Jezeršek M, Fležar M, Možina J. Laser multiple line triangulation system for real-time 3-D monitoring of chest wall during breathing. Stroj Vestn/J Mech Eng. 2008;54:7–8.


  21. Jeong Y, Jung ES, Lee H, Park YS, Song C, Moon H, Son J. A laser area segmentation algorithm that is robust against environmental changes in the measurement area in a vision-based non-contact measurement system using optical triangulation for monitoring respiratory rate during sleep. J Rehabil Welf Eng Assist Technol. 2020;14(4):256-262. https://doi.org/10.21288/resko.2020.14.4.256.

  22. Paz-Reyes MEP, Dorta-Palmero J, Diaz JL, Aragon E, Taboada-Crispi A. Computer Vision-Based Estimation of Respiration Signals. IFMBE Proc. 2020;75:252-261. https://doi.org/10.1007/978-3-030-30648-9_33.

  23. Blackburn JA. Objective Image Analysis of Astroglial Morphology in Rstudio Following Systemic. Columbus: Ph.D. Thesis, Ohio State University; 2019.

  24. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9:62–6.


  25. Golestan S, Ramezani M, Guerrero JM, Freijedo FD, Monfared M. Moving average filter based phase-locked loops: performance analysis and design guidelines. IEEE Trans Power Electron. 2014;29:2750–63. https://doi.org/10.1109/TPEL.2013.2273461.


  26. Bradski G, Kaehler A. Learning OpenCV. Beijing: O’Reilly Media; 2008.



Acknowledgements

This study was supported by the UNI-CORE(University & National Institute COllaboration for REgional Innovation) funded by the Ministry of Science and ICT, Korea.

This work was supported by the Korea Medical Device Development Fund grant funded by the Korea government (the Ministry of Science and ICT, the Ministry of Trade, Industry and Energy, the Ministry of Health & Welfare, the Ministry of Food and Drug Safety) (Project Number: 1711174476, RS-2022-00141072).

Funding

Not applicable.

Author information


Contributions

Conceptualization, Y.J.; software, C.S.; formal analysis, S.L; Writing-review & editing, J.S. All authors have read and agreed to the published version of the manuscript.

Corresponding author

Correspondence to Jaebum Son.

Ethics declarations

Ethics approval and consent to participate

The ethics of this study were reviewed and approved by the Institutional Review Board of Daegu-Gyeongbuk Medical Innovation Foundation (control number: DGMIF-20200605-HR-001-01). All methods were carried out in accordance with relevant guidelines and regulations. All patients provided written informed consent.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Jeong, Y., Song, C., Lee, S. et al. For a clinical application of optical triangulation to assess respiratory rate using an RGB camera and a line laser. BMC Med Imaging 24, 274 (2024). https://doi.org/10.1186/s12880-024-01448-5


  • DOI: https://doi.org/10.1186/s12880-024-01448-5
